“In the age of telepathic machines you’ll need a School of Thought…”
Last night I was falling asleep and my brain decided to kick into ‘funny mode’, writing the best comedy of my life. But I was too lazy to wake up and write it down…
And I am fairly sure it was hilarious – probably my best work, ever.
Imagine if there was a way I could simply record all of my thoughts, with all their detail intact, and share them with the world…
Well, in the not-so-distant future, consumer technology may well make that dream possible – though it will probably first appear as a mechanism for the early diagnosis of cancer.
QUICK HISTORY OF COMMUNICATION IN THE PAST 30 YEARS
As someone who grew up in the 1980s, I’ve seen communication devices develop as follows…
Fixed phones in home and office
Mobile phones (emerging early 90s)
Email (mid 90s)
Text (SMS appearing in mainstream use in early 2000s)
MSN Messenger-type systems – Skype, ICQ etc. (early 2000s for one-on-one)
Video Calls (e.g. Skype since late 90s, and then Hangouts from 2011)
And email (again) remaining a dominant force for many – with early adopters using systems like Slack to move people toward a ‘mixed messaging’ approach
As you can see, there is a mix of modes of communication: text, audio, and video.
And then we have Social Media, which can be seen now as a mix of all forms of the above – text, images, video (live and recorded) – and which is mainly ‘one-to-many’.
So imagine you were able to skip all the expressions of the thoughts you have, and jump straight to sharing the thoughts themselves…
When the head of the social media giant Facebook says the following, you have to prick up your ears a little:
“You’re going to just be able to capture a thought, what you’re thinking or feeling in kind of its ideal and perfect form in your head, and be able to share that with the world in a format where they can get that.” – Mark Zuckerberg
He does say it could be decades away, but things are moving quickly and there is one person you need to watch if you want to really understand this space.
This is what Peter Gabriel says about Mary Lou Jepsen’s work:
“She uses a brain scanner and plays a lot of images to someone and can then tell the computer to match every frame of video with a particular brain pattern. Then she watches the brain patterns and tells the computer to bring up the appropriate video image, so what she’s now doing is turning thoughts into video.”
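The matching Gabriel describes can be thought of as a nearest-neighbour lookup: record a brain-pattern vector for each video frame during training, then, given a newly observed pattern, retrieve the closest stored frame. Here is a toy sketch of that idea – all data below is synthetic, and the real systems use imaging and machine learning far beyond this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "training" data: one recorded brain-pattern vector per video frame.
n_frames, pattern_dim = 100, 64
frame_patterns = rng.normal(size=(n_frames, pattern_dim))

def match_frame(observed_pattern: np.ndarray) -> int:
    """Return the index of the stored frame whose recorded brain
    pattern is closest (Euclidean distance) to the observed one."""
    distances = np.linalg.norm(frame_patterns - observed_pattern, axis=1)
    return int(np.argmin(distances))

# A slightly noisy re-observation of frame 42 should map back to frame 42.
noisy = frame_patterns[42] + rng.normal(scale=0.1, size=pattern_dim)
print(match_frame(noisy))
```

The point of the sketch is only the shape of the pipeline: patterns in, frame index out – “thoughts into video”.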
As we move toward increased digitization, we have the potential for increased openness and transparency on one hand (e.g. exporting thought), and the potential erosion of privacy on the other.
Openness and transparency is what we so often seek from others, but privacy is what we hold onto for ourselves.
The idea of ‘Cortical Modems’ is that you could offload memories into the cloud and become “empathetic in a way that is different and more powerful…” as we access perspectives of events from multiple viewpoints.
If you are into NLP or meditation, you will be familiar with the structure of internal representations of experience – often coded in terms of ‘submodalities’, i.e. visual, auditory, kinesthetic – but imagine being able to ‘see’ the brain activity that happens when a thought arises, using a platform like Glass Brain.
I’ve been talking about Mary Lou Jepsen for over 7 years now, and as you’ll see below (well worth a watch), there are suggestions that things may not be quite as far away as Zuckerberg was suggesting.
She used to work at Facebook, but is clear to stress (twice) that this is not her day job:
When you combine this with Zuckerberg’s vision, you begin to see how this could happen. Mary Lou Jepsen has the ability to take consumer devices to the mass market, and that video shows the potential size of a unit we could wear.
For anyone interested in communication – particularly in tracing behaviour back to individual thoughts – it is an exciting space to be in.
But what if we didn’t even need to wear anything? At least not ‘on the outside’.
Well, that is being considered too…
In the past few years we have taken a huge step toward a Star Trek future where we talk to the computer, which knows our intent based upon historic interactions.
My buddy David Amerland writes extensively on this, and I recall one conversation where we discussed how we are already here.
So what happens when we go beyond this?
Well, there is talk of having the computers within us, and one such device would be a neural lace.
A neural lace is a ‘wireless brain-computer interface’ implanted in the brain, essentially connecting the wearer to the Cloud.
We would become a computer. Crazy stuff, I know. Yet it is being discussed by people with the nous and/or the money to make it happen.
It is almost unimaginable to consider the change in our consciousness where we can access any information through thought, bypassing the need to ‘talk to the phone’ as a way to retrieve information.
But if you continue the trajectory traced in this article, you can see an approach (at least conceptually) where you could also output thought, communicating into the Cloud – just as we do with social media now.
One of the reasons I am extensively exploring Virtual Reality is the following:
Facebook say that Virtual Reality/Augmented Reality is part of their 10 year road map.
Being able to connect with anyone, anywhere in these spaces is like having a holodeck, or a teleportation machine. When it comes to communication, which is my main focus, you could easily see how a headset reading brainwaves could (at least conceptually) be added to the ‘visual/audio display’ we already have with VR headsets like Oculus.
But if we are getting ‘plugged in’, what would it mean for ‘us’ and our expectations of being human? And how could it affect our relationships?
We’ve been here before, many times, and it is all about… trust.
THINGS THAT INCREASE TRUST IN RELATIONSHIPS:
Openness and transparency
Communication about issues in a ‘timely fashion’
Trust through association (i.e. recommendations and friendship groups)
Evidence that your assessment of trustworthiness was ‘correct’, which in turn creates a positive feedback loop
So in large part, it is about being able to accurately assess ‘where a person is at’.
And just like Paul Ekman’s work on micro-expressions, popularised through the TV show ‘Lie to Me’, the future will have ‘packets’ of information enabling better, more effective communication in these new realms.
If people can ‘fake a thought’, there will be ‘tells’.
Look at Ekman’s insights: in essence, people reveal their internal emotional state through their facial expressions, and he found there are universal (i.e. trans-cultural) patterns of ‘flashes’ of expression so quick you may miss them if not watching carefully.
These facial patterns can be codified into:
Disgust (think about drinking ‘snot’ to get this face!)
Contempt (sneer, with the nose pulled up and the lips slightly lifted)
Sadness (it is often in ‘the eyes’)
Fear (breath in, eyes wide)
Anger (jaw clenched, brow pulled down between the eyes)
Surprise (eyes open)
‘Happy’ (eyes and corners of the lips turned up)
As I say, these expressions only last a fraction of a second. They flash on the face, and then disappear.
If an expression lasts longer, e.g. over 1 second, it may not be a ‘real’ flash of the person’s internal state.
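That timing rule is simple enough to express in code: keep only the flashes brief enough to count as micro-expressions. A minimal sketch – the one-second threshold follows the text above, and the expression events with their timestamps are entirely hypothetical:

```python
# Hypothetical detected expression events: (label, start_seconds, end_seconds)
events = [
    ("anger", 10.00, 10.08),   # an 80 ms flash
    ("happy", 12.00, 13.50),   # held for 1.5 s - likely deliberate, not a flash
    ("fear", 20.10, 20.25),    # a 150 ms flash
]

MICRO_MAX_DURATION = 1.0  # per the text: over ~1 s may not be a 'real' flash

def micro_expressions(events):
    """Keep only expressions brief enough to count as micro-expressions."""
    return [label for label, start, end in events
            if (end - start) < MICRO_MAX_DURATION]

print(micro_expressions(events))  # 'anger' and 'fear' survive; 'happy' is too long
```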
HOW DOES THIS RELATE TO THE DIGITAL WORLD?
When you look at software already freely available, you can see how Ekman’s work could see wider use in the digital space.
With a camera at every turn, such apps can show percentages of the emotions people are experiencing, based on their facial expressions.
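Turning raw classifier scores into the ‘percentages of emotions’ such apps display is typically just a normalisation step – for example, a softmax over per-emotion scores. A sketch with made-up scores (real apps would get these from a trained facial-expression model):

```python
import math

# Made-up raw scores from a hypothetical expression classifier,
# one score per Ekman-style category.
scores = {"disgust": 0.2, "contempt": 0.1, "sadness": 0.5,
          "fear": 0.3, "anger": 2.0, "surprise": 0.4, "happy": 0.6}

def emotion_percentages(scores):
    """Softmax-normalise raw scores into percentages summing to ~100."""
    exps = {k: math.exp(v) for k, v in scores.items()}
    total = sum(exps.values())
    return {k: round(100 * v / total, 1) for k, v in exps.items()}

pcts = emotion_percentages(scores)
print(pcts)
```

With these invented scores, ‘anger’ dominates the output – exactly the kind of readout an emotion-tracking app surfaces.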
I am excited about technology – you probably guessed that already. But it is communication that interests me more, with the realm of ‘thought’ being so central to our emotions and expression. This new realm will require a new approach: an unforeseen way of connecting what is ‘in’ with those outside our skins.