AI in Everyday Life

On May 21st, more than 5,000 participants gathered at the European Convention Center Luxembourg to attend a new edition of ICT Spring. This year, Artificial Intelligence was one of the burning topics addressed by international experts. The first morning session was entitled "AI in Everyday Life".

Day 1 of the AI/Digital Summit was opened by Master of Ceremonies Jean Rognetta, Editorial Director of Forbes France. He opened his welcoming speech by declaring that it is “an honour and a pleasure to chair the first part of the summit” and went on to say that as a journalist he wanted to “talk about the impact of AI on my everyday life … which is almost nothing” saying that he “represents an industry that is directly threatened by AI”.

He then went on to talk about divergence, or the lack of it, and complained that the tech world is now increasingly split between the USA and China. He quoted the statistic that there have now been over 200 unicorns launched in the USA and 150 in China.

Mr Rognetta also spoke about the study by MMC Ventures that suggests that as many as 40% of European AI companies do not actually have AI in their software, and speculated that it would not be much different in the USA.

In closing he pointed out that although AI is now everywhere “except press and journalism”, it is most ubiquitous in marketing (23%), followed by customer service and IT, each with a 16% share. Before handing over to the next speaker, Bruno Zamborlin, he commented that the biggest single use of AI is in chatbots and wryly observed that “Artificial they may be … intelligent they certainly are not!”


HyperSurfaces – Merging the Physical and the Data Worlds with edge AI

Bruno Zamborlin, founder of HyperSurfaces, started his presentation by saying that “We all live our daily lives spread across a physical world and the digital <<data>> world” and that these are “two parallel universes, connected only through little wormholes like touch screens and smartphones”. He then told the audience that he “dreams of completely merging these two universes”.

His vision is of a world of intelligent materials, where “every object of any shape or size can become data enabled via its surface”. Glass, wood, plastic, a panel or a steering wheel can all understand the physical interactions between people and the object.

He presented “edge AI” as the technology that makes this work: chips are embedded in objects, and sensor data is recorded and processed in real time (under 20 ms latency) to recognise events.

Mr Zamborlin then showed a video demonstration of a “hyper car door” in which three vibration sensors, costing just a few dollars, can detect more than 35 different events as various parts of the door are touched, opened and closed. He pointed out that the technology can equally be used for smart homes, smart security, smart shops and so on, as the end user defines the events that they want to be detected. All of this takes place without WiFi, so the data remains private, and he believes that HyperSurfaces is the first data company for physical interaction data.
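To give a flavour of the kind of on-chip pipeline described above — a window of sensor samples reduced to a few features and matched against known event signatures — here is a minimal sketch. The feature choice, template values and event names are invented for illustration and are not HyperSurfaces’ actual system:

```python
import math

# Hypothetical event templates: (energy, zero-crossing rate) per event.
# A real system would learn these from labelled vibration recordings.
TEMPLATES = {
    "knock": (0.80, 0.10),
    "swipe": (0.20, 0.45),
    "tap":   (0.40, 0.20),
}

def features(window):
    """Extract two crude features from a window of vibration samples."""
    energy = sum(x * x for x in window) / len(window)
    crossings = sum(
        1 for a, b in zip(window, window[1:]) if (a < 0) != (b < 0)
    ) / len(window)
    return energy, crossings

def classify(window):
    """Nearest-template classification: cheap enough to run on an
    embedded chip, with no network round trip, which is what keeps
    latency low and the raw data private."""
    e, z = features(window)
    return min(TEMPLATES, key=lambda k: math.hypot(TEMPLATES[k][0] - e,
                                                   TEMPLATES[k][1] - z))
```

A sustained high-energy burst lands nearest the “knock” template, while a low-energy, rapidly oscillating window lands nearest “swipe” — the end user would simply record examples of the events they care about to define the templates.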


Neurosciences x AI = Superpowers

There followed a very entertaining presentation by Professor Diana Derval, Chair and Research Director of DervalResearch, who stated that, before we can talk about artificial intelligence, we have to understand intelligence itself, and posed a simple audience participation question.

“You have a normal bear, a normal monkey, and a normal banana; you must put them in two groups”. It transpires that how we group these three items, and how quickly we group them, demonstrates different types of intelligence and thought processes … and that these different kinds of thought processes have parallels in the AI world, favouring different kinds of AI approaches … expert systems, pattern recognition and so on.


Professor Derval related the story of the first autonomous car that mistook a garbage bag for a pedestrian … both of them complex irregular shapes … and suggested that the problem was that the AI system was trying to emulate human thought, when senses that many animals have … night vision, infra-red … may be more appropriate. Why emulate pattern recognition when all you need is a heat sensor?

She concluded that the advanced technology in cars may make us think that they are bringing us superpowers, but that this is quite illusory. As she put it, “when you develop an AI system, who is the target customer? Different applications need different styles … neuroscience can help define patterns … the natural world can provide us with other intelligence cues, and perhaps realistically we should strive for Enhanced, not Artificial, Intelligence”.


Emotion AI for a Better Relationship between Human and Machine

Hazumu Yamazaki, CSO of Empath, started by telling the audience the humorous anecdote of how, as a philosophy and literature student, he had never thought about starting an AI company, but then he met the co-founder in a bar and, when he was drunk, signed a contract and has been stuck with it ever since! It has been quite successful, and Empath won last year’s ICT Spring pitch competition, as well as a further 8 pitching events globally in 2018.

Empath technology recognizes emotion in voices, primarily joy, anger, calm and sorrow, in real time, and currently its main uses are in robotics and in call centres.

Looking at the case study of call centres, the technology helps in two important ways: training operators, and providing real-time alerts that bring supervisors in to help their staff with customers who are starting to get frustrated, before they get angry.
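The real-time escalation flow described for call centres could be sketched as a rolling check on per-utterance emotion scores. The score format, threshold and window size below are assumptions made for illustration, not Empath’s actual API; only the four emotions (joy, anger, calm, sorrow) come from the talk:

```python
from collections import deque

FRUSTRATION_THRESHOLD = 0.6   # hypothetical alert level
WINDOW = 3                    # consecutive utterances needed to confirm a trend

def should_alert_supervisor(scores):
    """Raise an alert once anger + sorrow dominates several utterances in a row.

    `scores` is an iterable of dicts like
    {"joy": .., "anger": .., "calm": .., "sorrow": ..} — the dict shape
    is an assumption for this sketch.
    """
    recent = deque(maxlen=WINDOW)
    for s in scores:
        negative = s["anger"] + s["sorrow"]
        recent.append(negative > FRUSTRATION_THRESHOLD)
        if len(recent) == WINDOW and all(recent):
            return True          # escalate before the customer gets angry
    return False
```

Requiring several consecutive negative utterances, rather than alerting on a single one, is what lets the supervisor step in on a genuine trend instead of a momentary spike.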

Mr Yamazaki raised some ethical questions prompted by more sinister requests for the use of Empath technology: “Can we use Empath as a lie detector, or to see if our partner is cheating?” He challenged AI companies to challenge themselves … to imagine the worst kind of dystopia that could come about from negative use of their products, saying that “We private companies developing AI should be honest enough to think about the dystopia ….”

He gave four standards that he feels AI companies should observe: think about their own technology and ethics; open up discussion to the public; use speculative design as a framework; and include an artist as a team member. Finally he asked, “Can you be brave enough to think of a dystopia that you can create?”

Artificial Intelligence for Good

Anita Huang, Project Manager, Perspicace, talked to the audience about her company’s WiFi motion and bio detector. She introduced the company motto “AI For Good” and was proud to talk about their relationship with Microsoft as a strategic partner. The technology works on the principle that, like radar, Wi-Fi fills a space with a signal that is disturbed by objects moving through it: a person walking, jumping, falling or breathing each creates a distinctive pattern of disturbances which can be detected.

Obvious applications of this technology are in monitoring elderly people in their homes, where a fall, or rapid or irregular breathing, can be detected and an alarm sent out; it is already widely used in nursing homes. It is also being used by emergency services for detecting people in fire or disaster zones and for greatly enhancing evacuation efficiency, as well as in smart hotels for reducing energy use based on people’s activity.

Several Chinese white goods manufacturers are also incorporating the technology in their appliances, and Mrs Huang closed by saying that traditional IR and motion devices normally have blind spots; their technology does not, and it is private.


A round table moderated by Jean Rognetta brought together Laurent Rapin (IoT Advisor, POST), Prof. Diana Derval, Hazumu Yamazaki, Bruno Zamborlin and Anita Huang. Jean Rognetta opened by saying that he wanted to summarize all that had been said and to launch a debate. He joked that he was born into a world where all we had to fear was war … now we have to fear a car door! He asked each of the panel to envisage the dystopia that could arise from the use of their technology, and started by asking Bruno Zamborlin, “What is the HyperSurfaces dystopia?”

Mr. Zamborlin responded that it is impossible to stop research … but it is our duty to start a debate about ethics. Perhaps his dystopia is to launch products without talking first about the effects that they will have, and he feels that Europe should lead in this. GDPR, for example, could be a great place to start these discussions between large and small companies, the state and individuals. The tech companies’ role is to show what AI can do; it is down to the regulators to discuss how far it can go.

Professor Derval thought that AI is not inventing anything new: humanity has always had nosy neighbours … but now it’s Alexa. Some people are unethical and will try to use new technology in an unethical way; AI, if wrongly used, is just one more weapon.

Mr Rognetta then asked Anita Huang if she could imagine how Perspicace could be used as a technology for bad instead of good. She responded that “our technology monitors people. Some people could use it to detect what people actually are doing in a room, which could have privacy implications. We need to protect the data so that it’s not used in the wrong way.”

Laurent Rapin stated that data safety is the first preoccupation of a telecom provider and must be built in from the outset. GDPR is there to protect us, and POST retains all its data exclusively in Luxembourg.

Through several questions and observations, the audience showed a widespread belief that AI infrastructure can easily be hijacked for commercial or political reasons, and that this is not a good thing.


"AI does not exist"

Luc Julia (VP Innovation & CTO, Samsung Electronics) opened by remarking that “The AI that exists today is not the way that it is portrayed in Hollywood. I want to talk about the limits of AI.”

He spoke initially about how AI first became a reality in the summer of 1956 at Dartmouth College in the USA, with the mathematical modelling of a neuron … then a network of them … then a brain.

Mr Julia went on to say that the first mistake was calling it AI, as it had nothing to do with intelligence. The first big realisation of this came in 1961, when the early pioneers realised that they could not teach their network to understand natural language, and although greater feats of “intelligence” were demonstrated through increased computing power, they were not really intelligent. So when Garry Kasparov was beaten at chess by a machine in 1997, it was not through intelligence but through algorithms … chess is rule based, with about 10^50 possible moves … the computer could store and analyse these moves and beat Kasparov because it had all the moves, not because it was intelligent.
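The “rules, not intelligence” point can be made concrete in a few lines of brute-force game-tree search. Chess’s ~10^50 states needed dedicated hardware, but the same principle fits on one screen for a toy game — here simple Nim (take 1–3 stones, taking the last stone wins), chosen purely as an illustration:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def wins(stones):
    """Exhaustive search over every reachable Nim position.

    There is no insight here, just enumeration of the rules:
    the current player wins if any legal move leaves the
    opponent in a losing position. Scaled up massively, this
    is the brute-force idea behind machine chess, not thought.
    """
    if stones == 0:
        return False                      # no move left: previous player took the last stone
    return any(not wins(stones - take)
               for take in (1, 2, 3) if take <= stones)
```

The search “solves” the game perfectly without understanding anything about it — which is exactly the distinction Mr Julia was drawing.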

In 2016 the Go world champion was beaten by DeepMind’s machine; Go has some 10^762 possible games. The machine used 1,500 CPUs and 300 GPUs … basically an entire data centre just to play Go … and drew an incredible 440 kW of power, while the human world champion’s brain was running on a mere 20 watts. The machine had to be more than 20,000 times more powerful to beat the human, mainly because the techniques the machine uses are not the techniques the human brain uses.

Another example of this lack of underlying intelligence is object recognition. To achieve near-perfect recognition of “a cat”, a computer needs about 40,000 example pictures, and still gets confused by a Picasso … a human brain needs just two images … again, it comes down to the difference between rules and intelligence.

In closing: AI, as it exists today, has no creativity and no invention … just rules, data and recognition. With current techniques (mathematics and statistics), AI cannot innovate … AI is about following rules; innovation is about breaking rules.


John Chalmers

Photos: Marion Dessard
