Brain-Computer Interfaces

Like every tool, technology can be used for good and evil, and some technologies carry risks that may even outweigh their benefits. The technology at stake here is one of them: Brain-Computer Interfaces (BCIs). In this article, we will ask, and try to answer, some of the big questions this technology raises. Should it be deemed an opportunity to transcend our limitations and reach our true potential? Or are we paving the road to a dark future?

Keywords: #Brain-Computer Interface, #BCI, #AI, #Technology, #LegalTech, #Neuroethics, #International Equity, #Regulations in BCI

Outlook

In 1964, Dr. Grey Walter used electrodes to monitor a patient’s brainwaves during surgery. According to Dr. Walter’s findings, the patient’s brain responded to the tasks he assigned faster than the patient himself did. Monitoring brain activity via an external device was happening for the first time in history, and it marked a turning point in neuroscience. Since then, many neuroscientists have worked in this field to “analyze brain signals in real-time to control external devices, communicate with others, facilitate rehabilitation or restore functions” [1]. Put differently, a BCI is a “direct connection between living neuronal tissue and artificial devices that establishes a non-muscular communication pathway between a computer and a brain” [2]. In BCI technology, two types of interface are used: “reader” and “writer” interfaces [3].

Reader interfaces are generally used by paralyzed or seriously disabled patients who rely on robotic limbs or other prostheses. Through this interface, communication between the brain and a prosthesis or speech generator can be established. The process generally unfolds in several steps: the BCI first monitors the brain waves; it then transfers them to a device that records these signals; finally, the signals are translated into commands to be executed by the selected device.
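
Sketched in code, that loop might look like the minimal Python pipeline below. This is an illustrative sketch only: `read_eeg_window` and `send_to_prosthesis` are hypothetical placeholders for hardware-specific APIs, and the decoder is a deliberately naive stand-in for a trained model.

```python
import numpy as np

def bandpass_filter(window: np.ndarray, low_hz: float, high_hz: float,
                    sample_rate: float) -> np.ndarray:
    """Crude FFT-based band-pass filter: keep only the frequency band of interest."""
    spectrum = np.fft.rfft(window)
    freqs = np.fft.rfftfreq(window.size, d=1.0 / sample_rate)
    spectrum[(freqs < low_hz) | (freqs > high_hz)] = 0
    return np.fft.irfft(spectrum, n=window.size)

def decode_command(filtered: np.ndarray) -> str:
    """Naive placeholder decoder; a real BCI would use a trained classifier here."""
    return "grasp" if filtered.mean() > 0 else "release"

def reader_loop(read_eeg_window, send_to_prosthesis, sample_rate: float = 250.0):
    """Monitor -> record -> translate -> act, mirroring the steps described above."""
    while True:
        window = read_eeg_window()                              # 1. monitor brain waves
        filtered = bandpass_filter(window, 8, 30, sample_rate)  # 2. clean the recording
        command = decode_command(filtered)                      # 3. translate the signal
        send_to_prosthesis(command)                             # 4. device executes it
```

The 8–30 Hz band in the sketch roughly covers the mu and beta rhythms often used in motor-imagery BCIs; everything else is simplified away.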

Writer interfaces, on the other hand, are generally used for the treatment of Parkinson’s disease, epilepsy, or severe (major) depression. In this interface, electrodes are implanted “to manipulate neural activity in specific regions and affect their function” [3].

Digitalizing the Brain, Digitalizing the Human

BCI technology is essential for the patients mentioned above: it lets them regain everyday experiences such as speaking, walking, or holding things, or helps them avoid mental disorders. In other words, some patients feel whole and alive only with this technology. According to a study by the ethicist Frederic Gilbert, some patients establish a “radical symbiosis” with the technology and come to consider it a part of themselves [3].

“The question persists and indeed grows whether the computer will make it easier or harder for human beings to know who they really are, to identify their real problems, to respond more fully to beauty, to place adequate value on life, and to make their world safer than it now is.”

Norman Cousins, “The Poet and the Computer” (1966).

On the other hand, BCI also carries severe risks. Apart from the legal and ethical debates, the most pressing issue is user safety, which involves serious health problems caused by implant surgeries and by the long-term neural effects of the devices. Furthermore, non-medical safety issues arising from device failures may put the patient in challenging situations.

For instance, BCI has already been combined with other technologies such as artificial intelligence (AI) and machine learning (ML), and adding blockchain and quantum technologies to this list seems quite possible within a few decades. Once algorithms enter the picture, things get even more complicated, particularly regarding accountability, transparency, and liability. The black-box nature of AI, together with inevitable algorithmic errors and biases, may lead to significant failures in algorithmic decision-making. A black box arises when we cannot precisely explain why given inputs produced a given output: AI and ML models learn patterns from the data they are fed, and it becomes practically impossible for a human to trace the causal chain from input to output. When examining liability arising from the use of AI, we therefore first need to analyze the extent and role of human judgment in this process.
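
The point can be seen even in a toy model. The numpy sketch below, with entirely hypothetical data standing in for decoded brain-signal features, trains a tiny two-layer network; afterwards, the “decision” lives in matrices of real-valued weights, and nothing in them states why a particular input produced a particular output.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 200 samples of 16 "signal features" with binary labels.
X = rng.normal(size=(200, 16))
y = (X[:, :4].sum(axis=1) > 0).astype(float)   # hidden rule the model must learn

# A tiny two-layer network: the weights are just numbers, not explanations.
W1 = rng.normal(scale=0.5, size=(16, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):                            # plain full-batch gradient descent
    h = np.tanh(X @ W1)
    p = sigmoid(h @ W2).ravel()
    grad_out = ((p - y) * p * (1 - p))[:, None]  # d(squared error)/d(pre-sigmoid)
    grad_h = (grad_out @ W2.T) * (1 - h ** 2)    # backpropagate through tanh
    W2 -= 0.1 * h.T @ grad_out / len(X)
    W1 -= 0.1 * X.T @ grad_h / len(X)

p = sigmoid(np.tanh(X @ W1) @ W2).ravel()
print("train accuracy:", ((p > 0.5) == y).mean())  # well above chance...
# ...yet inspecting W1 and W2 reveals no human-readable causal chain.
```

With 16 features and one hidden layer the chain is already opaque; production decoders with vastly more parameters are far worse.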

In our daily lives, we frequently use AI tools to communicate, edit our texts, shop, or find online content we will probably like. In most cases, AI only assists people, and human judgment plays the primary role. In other words, the decision is made not by algorithms but by humans, although the algorithms can sometimes manipulate them.

However, the question remains: how should we define the extent and role of human judgment when an AI-based BCI is implanted in the human brain, the very center of decision-making? The limitations of AI (at least at the current state of the art) make unintended and even harmful outcomes for the user or for third parties highly possible. For example, a patient with locked-in syndrome may harm another person with an artificial limb or neuroprosthesis because of mistrained data or brain signals misinterpreted during signal processing or signal transduction. This situation requires a thorough analysis that weighs multiple variables, values, and risk allocations.
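
One common engineering mitigation for exactly this failure mode is to refuse to act on low-confidence decodings. The sketch below is a hypothetical guard, not any vendor’s safety system: `decode_with_confidence` stands in for whatever classifier the BCI uses, and the threshold and hold-position fallback are our own assumptions.

```python
from typing import Callable, Optional, Tuple

# Hypothetical decoder type: maps a raw signal to (command, confidence),
# e.g. ("grasp", 0.94).
Decoder = Callable[[bytes], Tuple[str, float]]

def safe_actuate(signal: bytes,
                 decode_with_confidence: Decoder,
                 actuate: Callable[[str], None],
                 threshold: float = 0.95) -> Optional[str]:
    """Execute a decoded command only if the decoder is confident enough.

    A misdecoded "grasp" near another person is exactly the harm scenario
    described above, so ambiguous signals default to doing nothing and
    letting the user repeat the intention."""
    command, confidence = decode_with_confidence(signal)
    if confidence < threshold:
        return None               # hold position instead of guessing
    actuate(command)
    return command
```

Where to set the threshold is itself a risk-allocation choice: too high and the prosthesis becomes unusable, too low and misdecoded commands slip through.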


An illustrative scheme of how BCI technology works. Source: https://wp.ece.uw.edu/brl/neural-engineering/bci-security/

Now we come to the point that fuels a heated multidisciplinary debate among neuroscientists, neuroethicists, legal experts, and government agencies: what can be sacrificed, and for the sake of what? In other words, the equation at stake has multiple variables, and to find them we should ask (and try to answer) some questions.

How Much of Us, How Much of It?

“Humans are willing to (over-)rely on algorithmic support, yet averse to fully ceding their decision authority” [4]. No one can deny that there is (at least psychologically) a huge difference between an “autonomous statistics-based computer system” replacing a human doctor and a doctor merely advised by the automated system [4]. Now, if we insert a device boosted with AI and ML into our decision-making organ, then who is really deciding? And how much authority and autonomy will the person retain?

Deep Brain Stimulation (DBS), a BCI approved by the US Food and Drug Administration (FDA) in 1997, has been used to treat obsessive-compulsive disorder, Parkinson’s disease, and epilepsy, and is being investigated for other uses [3]. According to some reports, some Parkinson’s patients became “hypersexual” or developed “other impulse control issues” after DBS [3]. Additionally, neuroethicists point to the difficulty of predicting and assessing other side effects, given the complex nature of the technology. Directly or indirectly, the things we do (or do not do) differ when we “link” our brains to a device and when we do not. It is therefore clear that BCI has, to a greater or lesser extent, the power to change our actions, our personalities, and even our impulses.

Overall, one thing is certain: we are inserting an artificial mechanism into an organ tightly bound to our personhood and freedom. That means we are creating severe vulnerabilities to malicious attacks, misuse, and manipulation. Although the risks are serious, we believe the benefits may outweigh them. Broadly speaking, technology has a dual character: it creates positive and negative ramifications at the same time. When the negative ramifications cannot be eradicated or minimized by technical or regulatory design, we tolerate those risks and challenges for the sake of a higher good or interest. In this spirit, despite the potential hazards, BCIs give a voice to someone unable to speak, enable someone to use a prosthesis and provide motive power, or relieve someone of a mental disorder. Most importantly, for some patients this technology is the only opportunity to overcome their misfortunes and enjoy a better quality of life. Weighing the pros and cons, we believe the benefits will heavily outweigh the “possible side effects.”

Stephen Hawking, by MARCO GROB/WIRED UK

Commercializing Minds

Even though BCI technology initially targets individuals with certain diseases or disabilities, it may turn into a “must-have consumer gadget” in the age of the metaverse [6]. In such a scenario, rather than being a medical necessity, BCIs could be implanted in any (volunteering) individual who wants to maximize his or her virtual experience in video games, social networking, or any digital interaction. Companies like Facebook, Neuralink, and Kernel are already operating and/or investing in neurotechnology, particularly in BCIs, and their research groups and labs are pursuing commercial uses of the technology. However, despite the acknowledgments we made above for medical uses of BCIs, we are rather skeptical about non-medical use cases: will the results be desirable for humanity, and will they serve the common good?

More importantly, the cyber risks arising from this technology cannot be underestimated. As we have become more involved in “cyberspace,” challenges regarding data security and privacy have dramatically increased. Millions of internet users have already suffered from data breaches and leaks, since vast amounts of personal and highly sensitive data are collected, copied (backed up), and stored digitally. Needless to say, data collected via BCIs constitute the most personal and sensitive data one can imagine, and any breach would cause irrecoverable damage. If this technology becomes widespread, no security measure may suffice to protect such databases, particularly in the long run, when sufficiently powerful quantum computers enter the stage. Most importantly, providing “digital access” to our brains may allow certain groups (e.g., terrorists) to misuse the technology for illicit activities such as brainwashing or manipulating brain signals to control thoughts or behaviors. Indeed, one study examined the risks associated with these technologies and showed how vulnerable BCIs are to certain neuro-crimes, such as brain-hacking [7].
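
To make the defensive side concrete, the sketch below shows the most basic mitigation: encrypting recorded neural data at rest, here with the symmetric Fernet scheme from Python’s `cryptography` library. It is a minimal illustration under our own assumptions, not a description of how any BCI vendor actually stores data, and classical schemes like this are precisely what sufficiently powerful quantum computers could eventually weaken.

```python
from cryptography.fernet import Fernet

# Symmetric key; in practice it would live in a hardware security module,
# because whoever holds the key can read the neural recordings.
key = Fernet.generate_key()
cipher = Fernet(key)

# A hypothetical recorded window of neural signal data, serialized to bytes.
neural_window = b"subject-042,t0=12:00:00,0.13,-0.07,0.21,0.02"

token = cipher.encrypt(neural_window)   # stored form: useless without the key
restored = cipher.decrypt(token)        # only key holders recover the signal
assert restored == neural_window
```

Encryption at rest protects a stolen database, but not a compromised device reading or writing signals live, which is why neurosecurity [7] is a broader problem than classical data security.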

Exacerbating the Yawning Gap

The Digital Divide. Source: https://blogs.imf.org/2020/12/02/how-artificial-intelligence-could-widen-the-gap-between-rich-and-poor-nations/

According to the UNCTAD Digital Economy Report 2019, we are witnessing a gap between the “hyper-digitalized” and the “under-connected” [8]. Though the developed part of the world has become extensively digital, the other part still struggles immensely with a lack of basic needs such as food, clean water, and primary education. Needless to say, both contribution to and access to this technology will differ drastically between developed and least developed countries. Accordingly, commercializing this technology for arbitrary use cases (including enhancing human capabilities) would inevitably cause a humanitarian catastrophe, since the yawning gap would widen further.

BCI technology, without any doubt, promises to move beyond the limitations of being human and makes the creation of an “übermensch” possible. Apart from the ethical or philosophical debates around it, if this technology is ever used for non-medical cases, then equal access must be ensured. Otherwise, people born in the least developed countries will “suffer a citizen penalty” since:

“(p)eople feel that no matter how hard they try they cannot increase their general standard of living in a country that is growing slowly – and that the only way to close the income gap is to move to a country with a higher average income” [9].

In other words, this immense global inequality will at some point become irreversible and lead to “migration pressure,” which will make the situation even worse due to the brain drain in the least developed countries [9]. More importantly, creating such an extensive difference in human skills and capabilities will inevitably trigger a “classification” of humans, which is against human dignity and the principle of equality and violates Article 1 of the Universal Declaration of Human Rights.

Regulatory Aspects

Most existing regulations regarding BCI technology aim to establish safety, quality, and performance requirements specific to medical use cases, such as the EU Medical Devices Regulation (MDR) or the FDA’s non-binding guidance on Implanted Brain-Computer Interface (BCI) Devices for Patients with Paralysis or Amputation – Non-clinical Testing and Clinical Considerations. However, there is no regulation for non-medical uses of BCI technology yet. International standards and a regulatory framework are needed to prevent the challenges described above.

It is often useful to be skeptical when exploring new technologies, especially in order to detect and prevent possible challenges and drawbacks at the very beginning. Nevertheless, these challenges should not be treated as absolute impediments to employing a technology, since they may be fixed with further research and development. As discussed, the non-medical implications of BCI technology pose severe perils and require especially careful consideration to make wise choices and find the right regulatory balance.

Like every tool, technology can be used for good and evil, and BCI technology is no exception. The outcome will depend on (i) how this technology is designed and (ii) how it is regulated. Both the design and the regulation (should) aim to maximize positive ramifications while minimizing negative ones; however, in many instances removing the negative ramifications also means eliminating the positive ones, since both derive from the same feature of the technology [10]. In such cases, technologists, and especially regulators, should evaluate the negative and positive aspects of the technology by considering “a ratio between (i) eliminating (partially or totally) the negative ramifications and (ii) maintaining (partially or totally) the positive ramifications of the technology” (the “EM ratio”) [10].
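
To make the idea tangible, here is a toy numerical sketch of how such a trade-off might be scored. The scoring function and the example numbers are our own illustrative assumptions, not a formula given in [10].

```python
def em_score(neg_eliminated: float, pos_maintained: float) -> float:
    """Toy EM-ratio score in [0, 1]: how much of the harm a design removes,
    weighted by how much of the benefit it preserves. Both inputs are
    fractions in [0, 1]."""
    return neg_eliminated * pos_maintained

# Hypothetical comparison of two regulatory designs for a BCI feature:
ban = em_score(neg_eliminated=1.0, pos_maintained=0.0)       # outright ban
audited = em_score(neg_eliminated=0.7, pos_maintained=0.9)   # audits + use limits
print(ban, audited)  # 0.0 vs 0.63: the ban scores worst despite removing all harm
```

Under this (assumed) scoring, a design that removes most harms while preserving most benefits beats a ban that removes everything, which is the intuition behind weighing elimination against maintenance.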

To sum up, analyzing the implications and effects of BCI technology is multi-dimensional and requires a thorough examination of the different use cases. It is crystal clear that BCI technology can play a vital role in improving the quality of life of certain patients and disabled individuals, and in maximizing the virtual experience of internet users. Nonetheless, this technology is unprecedentedly complex and poses significant health and security threats.

It is still too early to choose a stance, since we know very little about (i) the future development of BCI and (ii) how it will interact with other emerging technologies such as AI, blockchain, or quantum technologies. If certain improvements, security measures, and socio-ethical standards can be guaranteed, we may cut across our current limitations and reach the peak of our cognitive and intellectual evolution.


[1] Christoph Guger, Brendan Z. Allison, Günter Edlinger, State of the Art in BCI Research: BCI Award 2011, in Brain–Computer Interface Research: A State-of-the-Art Summary, 2013.

[2] Wolpaw JR, Birbaumer N, McFarland DJ, Pfurtscheller G, Vaughan TM, Brain-Computer Interfaces for Communication and Control, Clin Neurophysiol 113(6), 2002.

[3] Liam Drew, Agency and the Algorithm, Nature 571, 2019.

[4] Marina Chugunova, Daniela Sele, We and It: An Interdisciplinary Review of the Experimental Evidence on How Humans Interact with Machines, Journal of Behavioral and Experimental Economics 99, 2022.

[5] Yochanan Bigman, Kurt Gray, People Are Averse to Machines Making Moral Decisions, Cognition. 181, 2018.

[6] Alexandre Gonfalonieri, A Beginner’s Guide to Brain-Computer Interface and Convolutional Neural Networks, Towards Data Science, 2018.

[7] Marcello Ienca, Pim Haselager, Hacking the Brain: Brain–Computer Interfacing Technology and the Ethics of Neurosecurity, Ethics and Information Technology 18, 2016.

[8] UNCTAD, Digital Economy Report 2019-Value Creation and Capture: Implications for Developing Countries, 2019.

[9] UNCTAD, Technology and Innovation Report 2021- Catching Technological Waves: Innovation with Equity, 2021.

[10] Thibault Schrepel, Law + Technology, Stanford University CodeX Research Paper Series, 2022.

Published by Ece Su Ustun

Lawyer & Researcher, Bilkent University (B.A.) & University of California, Berkeley (LL.M.)
