If we ever thought of it as a race, it seems that software is going to win. For now. See, hardware has tough constraints: it is limited by fundamental physical laws, and as long as we don't all have quantum computing technology in our smartphones, we have to figure out other ways to improve the overall usability. This is where software comes into play. In software, we face few constraints (besides ethical ones, maybe) when it comes to overcoming such issues.
Regarding the issue of eye contact while video chatting on a smartphone, there are basically two ways to solve it. The first is the hardware way: a selfie camera hidden beneath the phone's display. Oppo and Xiaomi are already doing that (you can still see the camera, though), and Samsung is planning to hide the camera beneath the screen as well.
Apple did it with software. With iOS 13, the update to their mobile OS, they implemented FaceTime Attention Correction. I like to call it the Mona Lisa effect. (Side fact: the actual Mona Lisa isn't really looking at you from all angles, but the name of the effect has stuck.)
"In its new iOS 13 update to FaceTime, Apple is experimenting with altering your face so it looks like you're looking directly at the other person." (Mark Wilson for fastcompany.com)
That's all software. Apple decided to put effort into a software solution to a problem that competing companies are trying to solve with hardware. Good move. We have seen a lot coming from Google's Android lately, too. Remember the restaurant reservation that was made by an AI?
With software we create virtual and artificial realities. It helps us understand languages we don't know and generate pictures that were never real in the first place. And with all the deepfake software out there, we're getting a (maybe dystopian) idea of what the future could look like. Of course, this raises many important questions, such as: which visual representations can we really trust?