Friday, September 11, 2015

Wearables and Predictive Agents: Towards better-behaved, cognitively offloaded humans

“A general “law of least effort” applies to cognitive […] exertion. The law asserts that if there are several ways of achieving the same goal, people will eventually gravitate to the least demanding course of action. In the economy of action, effort is a cost, and the acquisition of skill is driven by the balance of benefits and costs. Laziness is built deep into our nature.”
Daniel Kahneman, Thinking, Fast and Slow

Thanks to the work of Daniel Kahneman and others, we now understand our cognitive processes as being divided into two systems. System 1 produces the fast, intuitive reactions and instantaneous decisions that govern most of our lives. System 2 is the deliberate type of thinking involved in focus, deliberation, reasoning or analysis – such as calculating a complex math problem, exercising self-control, or performing a demanding physical task. [DK]

“People who are cognitively busy are also more likely to make selfish choices, use sexist language, and make superficial judgments in social situations. Memorizing and repeating digits loosens the hold of System 2 on behavior.” [DK]

Developments in sensor-enabled wearable technologies are bringing about fundamental shifts in interaction paradigms and new capabilities for real-time, context-aware systems. In the near future, new possibilities for digitally augmenting reality will emerge. By overlaying the input of sight, hearing and vital signs with mobile services, AI and machine learning, both in-situ and preemptive access to information can be provided. We will get cognitive assistance. It has the power to significantly change the way we live [and improve our behavior and self-control – theoretically at least].

Level 1: Helpful Guidance

Situational awareness and voice control lay the foundation for a “second brain”, as users rely on devices to listen for their (sometimes unspoken) requests and offer suggestions triggered by real-time observations and user history.

Phase 1: Digital Experiences (mainstream)

In their digital form, such solutions have been available for years and are today mass-market consumer experiences.

GPS navigation systems guide us step-by-step to our destination, offering just-in-time voice guidance. A difficult cognitive task (navigating unfamiliar territory) has been transformed into the trivial exercise of following directions. Cognitive effort is minimized, and under normal circumstances we would not choose the paper map over the GPS navigation system.

DriveSafe.ly is one example of a mobile application that offers in-situ helpful guidance. It reads text messages and emails aloud in real time and responds automatically, without drivers touching their mobile phones.

Phase 2: Physical-Digital Hybrid Experience (emerging)

The combination of sensor-enabled wearables with cloud computing services creates the possibility of building real-time, truly personal contextual experiences.

A GPS navigation system or DriveSafe.ly issues its recommendations and guidance purely based on specific, system-internal triggers (i.e. a missed turn on the road, a new text message), regardless of whether you are in the right personal context to receive the guidance.

For example, if your baby is crying loudly in the back seat when a message arrives, you can’t really interact by voice – the voice interface is not useful even for declining to have the message read aloud. It would be better if the system could detect the high noise level around you and start the conversation only at the right moment (i.e. when you can actually hear the message).
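That timing logic amounts to a simple policy: hold incoming messages while the ambient noise is high, and read them out once the microphone reports a quieter cabin. Here is a minimal sketch of such a noise-gated reader; the class name, the 70 dB threshold and the idea of periodic microphone samples are all illustrative assumptions, not a real product API.

```python
from collections import deque

NOISE_THRESHOLD_DB = 70  # assumed cutoff above which speech output is pointless


class NoiseGatedReader:
    """Queues incoming messages; releases them only when the cabin is quiet."""

    def __init__(self, threshold_db=NOISE_THRESHOLD_DB):
        self.threshold_db = threshold_db
        self.pending = deque()

    def on_message(self, text):
        # A new text or email arrives: never interrupt immediately.
        self.pending.append(text)

    def on_noise_sample(self, ambient_noise_db):
        """Called periodically with a microphone reading; returns messages to speak now."""
        if ambient_noise_db >= self.threshold_db:
            return []  # crying baby, engine roar: stay silent
        spoken = list(self.pending)
        self.pending.clear()
        return spoken
```

With this policy a message arriving at 85 dB stays queued and is spoken only after a later sample drops below the threshold.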

Furthermore, by monitoring the driver through a variety of sensors (heart rate, for example), the car can understand the driver’s state. The vehicle could change the song to relax the driver, for instance. Ford is already seeking to use such technologies to better monitor driver health.
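A first approximation of such a driver-state response could be a simple rule over the heart-rate signal. The thresholds and playlist labels below are invented for illustration; a real system would use far richer models.

```python
def pick_playlist(heart_rate_bpm, resting_bpm=65):
    """Toy rule: well above resting rate suggests stress, well below suggests drowsiness."""
    if heart_rate_bpm > resting_bpm * 1.4:
        return "calming"      # stressed driver: switch to relaxing music
    if heart_rate_bpm < resting_bpm * 0.9:
        return "energizing"   # drowsy driver: something more upbeat
    return "neutral"          # no intervention needed
```

For a driver with a resting rate of 65 bpm, a reading of 100 bpm would trigger the calming playlist, while 70 bpm would leave the music unchanged.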

Another evolution powered by wearables relates to shifts in user interaction paradigms toward more natural interactions – better voice interactions with hearables, better visualization opportunities with glasses, better gesture interfaces with wrist and ring wearables. Ultimately this can take driving toward a hands-free, immersive experience.

Starting to exploit these opportunities, Mercedes-Benz has developed its Glassware project, which is designed to work seamlessly with the car’s navigation system. Route directions are overlaid via the glasses onto the road, allowing drivers to keep their eyes off a GPS screen.

The automotive environment is just one example of how the integration of the physical with the digital can actually improve the experience and lighten the cognitive effort in certain situations, allowing us to focus better on what matters.


Level 2: Predictive Assistance

Smart devices will increasingly function as smart assistants to users, anticipating what information they need based on past behaviors, current location, environmental conditions and detailed information about their physical (vital signs) and digital (calendar, contacts) state. Purveyors of these systems could market them with promises such as “minimizing cognitive load and giving you more time to focus”.


Phase 1: Digital Experience (emerging)

Cortana and Google Now are intelligent personal assistants augmented with predictive search capabilities. At first they could only guess which notifications and apps would be most useful to you, but their capabilities are increasing, powered by machine learning. The systems are designed to learn the user’s behavior, find patterns in their activity, draw insights from other applications and input sources, and continuously refine their predictive recommendations.

Osito is a predictive intelligence application for smartphones that helps you get where you need to be by merging traffic information, weather, flight details and your calendar.
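The core of such a “when should I leave” feature can be approximated by working backwards from the next calendar event using a live travel-time estimate. The function below is a hedged sketch of that idea; the field names, the 35-minute travel estimate and the 10-minute safety buffer are assumptions, not Osito’s actual algorithm.

```python
from datetime import datetime, timedelta


def departure_time(event_start, travel_minutes, buffer_minutes=10):
    """Work backwards from the event start: travel time plus a safety buffer."""
    return event_start - timedelta(minutes=travel_minutes + buffer_minutes)


# Example: a 14:00 meeting with a 35-minute drive under current traffic
meeting = datetime(2015, 9, 11, 14, 0)
leave_at = departure_time(meeting, travel_minutes=35)
# leave_at == datetime(2015, 9, 11, 13, 15)
```

In a real assistant, `travel_minutes` would be refreshed continuously from a traffic service, so the suggested departure time shifts as conditions change.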

Phase 2: Physical-Digital Hybrid Experience (early phases)

Combine an intelligent personal assistant like Cortana or Google Now with a wearable, sensor-based system and the result can be quite powerful.

Cognition-assisting presentation devices (in the form of eyeglasses or hearables, for example) can use their own sensors, add information from other connected devices (e.g. smartphones, the car, wrist bands), apply analytics and predictive algorithms, and alert us in real time with relevant information, taking into account our full sensory circumstances, digital presence and activity history.

Too much cognitive load, especially over the long term – even when it comes from small details and tasks – gets us tired and affects our well-being. “What do I need to buy from the grocery store, when is my son’s school appointment, do I have time to go to the bank between the other two meetings, which bus shall I take, what was the thing I had to ask my daughter’s teacher, what was that substance I recently heard is harmful and should check for in the margarine I buy, where did I see that face before, where do I need to turn” – these are all basically simple things that we need to remember and “solve” constantly, and we need the correct information to be easily available at the right time.
A cognitive-supporting system “knows” what my next question is and gives me the information even before I ask for it.

For example, a pedestrian navigation hearable could direct me more accurately along the route by knowing in which direction I am turning my head or looking.

Or imagine this scenario: the eyeglasses I am wearing recognize the face of the person approaching me as the new CEO of a company that is our customer (I have met him once and exchanged business cards, but I have a lousy memory for faces). The invisible earpiece whispers the CEO’s name in my ear and reminds me that I have a meeting scheduled with his team next week. I get a hint about the hot topic, and I can quickly touch on the subject while we are together in the elevator. I just got memory assistance (face, meeting and topic), the connection to a future event, and guidance on how to act in order to get better results in a future situation. It significantly reduced my cognitive effort and improved my chances of success with the customer.
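The elevator scenario is essentially a chain of three lookups: face to identity, identity to upcoming meetings, and identity to talking points. A minimal sketch of how such a briefing could be assembled, with in-memory dictionaries standing in for the real face-recognition service, calendar and CRM (all names and data here are invented):

```python
def memory_assist(face_id, contacts, meetings, topics):
    """Given a recognized face id, assemble the whisper-in-the-ear briefing."""
    person = contacts.get(face_id)
    if person is None:
        return None  # unknown face: nothing to whisper
    briefing = {"name": person}
    if person in meetings:
        briefing["next_meeting"] = meetings[person]
    if person in topics:
        briefing["talking_point"] = topics[person]
    return briefing


# Toy stand-ins for the assistant's data sources
contacts = {"face-042": "Jane Doe"}
meetings = {"Jane Doe": "Project review, next Tuesday"}
topics = {"Jane Doe": "Renewal of the support contract"}
```

The interesting part of the real system is not this glue code but the recognition and prediction behind it; the sketch only shows how little the presentation layer needs once those services exist.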


---

References
http://electronicsmaker.com/automotive-intersecting-with-wearable-technology
http://www.cs.cmu.edu/~satya/docdir/ha-mobisys2014.pdf
http://www.ibmbigdatahub.com/blog/cognitive-computing-wearable-prosthetic
http://bigthink.com/delancey-place/the-two-systems-of-cognitive-processes
Thinking, Fast and Slow. Daniel Kahneman. Farrar, Straus and Giroux, 2011.
