Smart buildings, which are central to the concept of smart cities, are a new generation of buildings in which technological devices, such as sensors, are embedded in the structure of the buildings themselves. Smart buildings promise to personalize the experiences of their occupants by using real-time feedback mechanisms and forward-looking management of interactions between humans and the built environment.
This personalization includes continuous monitoring of occupants’ activities and the use of sophisticated profiling models. While these practices spark concerns about privacy, focusing on privacy alone is a matter of not seeing the forest for the trees. The questions raised by the massive arrival of digital technologies in our living spaces go far beyond it.
As a professor of real estate at ESG-UQAM, I specialize in innovations applied to the real estate sector. My research focuses on smart commercial buildings, for which I am developing a conceptual framework and innovative tools to enable in-depth analysis in the context of smart cities.
“Choices” proposed, or imposed
Thanks to ubiquitous computing, interactions between building occupants and embedded technology are quiet and invisible. As a result, occupants’ attention is never drawn to the massive presence of computers operating permanently in the background.
Personalization allows us, for example, to have the ideal temperature and brightness in our workspace at all times. This would be idyllic if this personalization did not come at a cost to the occupants, namely their freedom of action and, more fundamentally, their free will.
As technology increasingly mediates our experiences in the built environment, choices will be offered to us, or even imposed on us, based on the profiles that the building’s technological systems have created of us, according to the goals, commercial or otherwise, of those who control them (such as technology companies).
Having the ability to decide either to do something or not, and to act accordingly, is a basic definition of freedom. Smart buildings challenge this freedom by interfering with our ability to act, and more fundamentally, with our ability to decide for ourselves. Is freedom of action even possible for the occupants of a building where interactions between humans and their built environment are produced using algorithms that are never neutral?
Satisfied… but not free
The 17th-century English philosopher John Locke’s famous analogy of the locked room sheds light on this question. Suppose a sleeping man is carried into a room where, upon awakening, he finds himself engaged in activities that bring him great satisfaction, such as chatting with a long-lost friend.
Unbeknownst to him, the door of the room is locked, so he cannot leave even if he wants to. He is therefore not free, even though he voluntarily remains in the room and derives great satisfaction from what he is doing there.
Locke’s analysis reflects the situation of smart building occupants. They benefit from the personalization of their experiences from which they derive great satisfaction. However, once they enter a space, technology controls their interactions outside of their awareness. While they may want to stay in the building to enjoy personalized experiences, they are not free. Smart buildings are a high-tech version of Locke’s locked room.
There is nothing new about this problem. Already in the 19th century, in Notes from the Underground, the Russian writer Fyodor Dostoyevsky identified the challenges that computational logic poses to free will:
You will scream at me … that no one is touching my free will, that all they are concerned with is that my will should of itself, of its own free will, coincide with my own normal interests, with the laws of nature and arithmetic. Good heavens, gentlemen, what sort of free will is left when we come to tabulation and arithmetic…?
Deciding on the role of technology in our living spaces
Indeed, what can be said about our free will when choices are made for us by technology?
An action is something we do actively, as opposed to things that happen to us passively. Likewise, the active will to perform an action differs from the passive desire for an act to be done.
Because algorithms are concerned with predicting human behaviour, things happen passively to the occupants of smart buildings. Their role is limited to receiving stimuli, while the invisibility of the technology maintains the illusion that they retain sole control over their actions.
These human-built environment interactions erode our will to take action, replacing it with desires shaped and calibrated by models over which we have no control. By denying the free will of their occupants, smart buildings challenge the right to action that the German-American philosopher Hannah Arendt defines as one of the most fundamental rights of humans, the one that differentiates us from animals.
So, should we prohibit, or at least regulate, the technology embedded in smart buildings?
The answer to this question takes us back to the very origins of Western democracy. Long before the Big Tech companies (GAFAM), the Greek philosopher Socrates (who died in 399 BC) was concerned with the nature of the ideal city. In Plato’s Republic, Socrates explains that the difference between a city where citizens have every luxury and a city without luxuries, which he calls “a city fit for pigs,” is that the residents of the former can choose their way of life, whereas for the residents of the latter this choice is simply not possible.
Smart cities are the digital version of the luxury cities of antiquity. However, if they do not grant their residents the ability to make informed choices about technology, they provide satisfaction at the expense of those residents’ rights.
To avoid building an entire environment according to the philosophy of pigs, smart building occupants should retain the legally defined right to decide for themselves the role of technology in their living spaces. Only then can their freedom be respected.
Patrick Lecomte does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
This article was originally published on The Conversation. Read the original article.