Non Visuals
Who are we designing technology for? Who are we leaving behind? Could you remove VISION from a technology inherently defined by its visual components? These were the questions that initially kicked off my thesis: NON VISUALS.
I later realised this question about inclusion and exclusion was nonsensical, because we are all being cut off from our sensory and physical capabilities by the technology we as designers develop: technology that stresses our brains and constrains our bodies more every day. How do we break these chains and make better, more inclusive human-centred technology? By forgetting about the visuals altogether to begin with and focusing on our other senses, expanding the possibilities of technologies like VR and AR into ‘never seen/felt before’ experiences, and really exploring the affordances the technology gives us so we can exploit them fully. Then we can choose when visuals are the best way to interact and when a different modality is preferable.
If we’re bringing our digital and physical realities together (VR at one end of the spectrum, AR/MR in the middle and IoT at the other end), and if in the near future they’ll be combined into one UX Ecosystem that happens in the same space and time as our physical reality, then we will need more natural ways to interact with the technology than we currently have. Not by adding more devices to “feel” the technology more, but by reducing them and using our real objects, spaces and bodies as inputs and triggers for the UX Ecosystems and digital products of the future.