ICT Implants, nanotechnology, and some reasons for caution

By Y. J. Erden, Research Fellow in Ethics and Emerging Technologies, CBET, SMUC

Artificial implants can serve important medical functions in humans and, at least historically, have tended to be passive medical devices. Artificial valves and joints are some of the more common examples. This area is expanding, however, with the application of Information and Communication Technologies (ICT), as well as nanotechnologies, to the development of more sophisticated, primarily active, implantable medical devices (Nsanze, 2005, pp. 119–120). A number of significant ethical and philosophical issues arise from the latter category in particular, many of which require immediate attention.

Brain prosthesis

One example that brings key conceptual and ethical issues to the fore is the recent development of an ‘artificial hippocampus’. A vital region of the mammalian brain, responsible for consolidating information, the hippocampus is often among the earliest regions to suffer damage from Alzheimer’s disease. The brain prosthesis would supposedly mimic hippocampus function, rather than simply stimulate brain activity, and as such the ‘silicon chip implant will perform the same processes as the damaged part of the brain it is replacing’ (Graham–Rowe, 2003). Whether the chip can achieve this aim remains to be seen, but in 2009 the research team behind this innovation, led by Theodore Berger, was awarded a 4–year, $16.4 million DARPA grant to further its research into restoring lost memory function. This follows a hefty $24 million investment in similar research on Brain–Computer Interface (BCI) programs, split between six laboratories (Nsanze, 2005, p. 145). In financial terms, at least, the potential to repair or enhance the brain is being taken seriously.

Yet a number of important questions arise here. For example, is the artificial hippocampus concept dependent on a mechanistic and reductionist notion of mind/brain, and what are the implications of this? What kinds of scientific models are available here, are they adequate, and what are the limitations of such modelling? How are traditional boundaries between health and sickness being challenged? Is it the case that people ought never to accept certain defects or illnesses, old age and death, and if not, what are the limits and what kind of limits are they? These are only a sample of the questions that arise; more will be considered below.

In fact, perceptions of the person as a ‘work in progress’, something to be fixed or upgraded, permeate our popular culture as well as our approach to health and well–being. Under the ideals and visions of leading–edge biomedical science and technology, our corporeal identity is shifting in ways whose sustainability is questionable. Traditional Western views of the human person as composite (mind, brain, body, soul) are already being challenged by scientific research: empirical neuroscientific evidence, for example, lends credence to determinist accounts of the mind. Such challenges to human identity are likely to deepen as scientific advances lead to more sophisticated technologies. If technology becomes more invasive and yet more common within industrial cultures, it is likely to influence both concepts and conceptions of human identity substantially.

Mood control

Some other pertinent developments in implant technology are targeted at mood control. These include implantable neurostimulation devices that can modify electrical nerve activity and in this way be used to treat severe depression. Another relies on input–output interactions, or BCI, which alongside neurofeedback could allow a user/computer interface through electrical impulses (cf. Soekadar, 2008). These mechanisms might also be used in future for the management of depression. While still in their infancy, developments of this sort seem likely within the short to medium term. For example, in October 2003 neurobiologists led by Miguel Nicolelis reported success in teaching rhesus monkeys to consciously control a robot arm using their brains and visual feedback via ‘a closed–loop brain–machine interface’ (Carmena et al., 2003, p. 193).

Innovations of this kind raise all sorts of ethical and conceptual issues, not least with regard to what counts as severe in terms of depression, and the methods for diagnosis. Symptoms similar to severe depression can of course be displayed in other circumstances (bereavement is one such example); yet distinguishing between these two states is a problematic task, partly because effective diagnosis requires, amongst other things, understanding and coherence on the part of the patient. Other questions about the nature, and (potentially) even the value, of depression (why it may occur, what it does, how it feels to live with it) must not be ignored, particularly if the treatment is irreversible. Despite this, research into the possible impact of these technological developments on the nature of human identity is limited. Yet it cannot be denied that our identity is formed by a multitude of experiences, emotions, memories and so on, including those experiences or emotions that may cause us pain, such as depression or bereavement. While I would not wish to claim that those who are suffering ought to continue to do so, I remain sceptical of those who would reduce such conditions to the purely medical and develop technologies accordingly, particularly when their role in our lives and in the formation of identity remains so complex and uncertain. As Fiedeler and Krings (2006, p. 1) rightly point out, there is sometimes ‘a technological optimism’, which is problematic, not least because of the tendency of some research to view the brain as ‘just a complex but physico–chemical determined machine’.

Research that does consider ethical issues arising from implant technologies typically focuses on the impact of external ICT on the nature of identity (the internet, gaming, data mining), or on human enhancement. Much of the literature comes from within the sciences, and its focus is often on the related topics of human dignity, health, or sociological issues. While the question of identity feeds from and into these related concerns, it should not be subsumed by them. For instance, and as touched upon above, at what point, or by what criteria, in the use of such implants is the human identity of a person challenged? This embraces the related question of the distinction between repair and enhancement. What, if any, distinction can be drawn between using ICT implants for repair or restoration of ‘normality’ (whatever that may be) and using them for the enhancement or ‘upgrading’ of human powers and abilities? What does this tell us about our views and beliefs about human worth, identity and possible futures?

Enhancing memory

To understand the ramifications of this question, let us return to the example of an artificial hippocampus. Since the hippocampus is key to the formation of new memories, this artificial prosthesis could be used both to restore and to enhance memory. What if the hippocampus replacement leads to an improved capability for producing more accurate memories? Does this amount to repair or enhancement? Would more accurate, more complete, or even just more memory actually be an improvement at all? Does this take into account a broader idea of the value, in either scale or quality, of those memories we typically form, whether consciously chosen or otherwise? Would an artificial hippocampus negate this potential for choice altogether? Again, my point here is not to say that the technology is flawed, but only that questions still need to be asked, and to query how often they are. For example, in a discussion of what sort of enhancements we might expect from converging technologies, Burger (2002, pp. 167–168) lists ‘better’ senses, memory and imagination, all of which, as the above shows, are contentious and require further consideration, not least because the idea of ‘better’ presupposes current boundaries to be somehow insufficient or limiting. What would it mean for our imagination to be better? And what might that mean for how we live our everyday lives? In areas like nanotechnology and ICT, presuppositions about our identity inform what sort of contributions these technologies can make to our lives and well–being. Progress is presumed on the basis of what may in fact turn out to be significant misunderstandings about the complexity of identities and identity–formation.

In fact, it seems very likely that our memories play an important role in this process, and it is clear that we do not (perhaps cannot) always understand the ways in which this occurs. Before we take for granted the benefits of technologically advanced implants for these purposes, we need to be clear about what this might mean for us. This is by no means to say that these developments might not be valuable; indeed, there is much evidence that they will be. Those affected (directly or indirectly) by neurological disorders such as Alzheimer’s disease and illnesses like depression may welcome these developments, and there is perhaps much to welcome. My point is only that we ought to consider the broader effects of these developments, and to proceed with caution.

Convergence and ICT implants

Alongside these developments are issues arising from the convergence of nanotechnology and ICT for the production of ICT implants. In broad terms it is clear that ‘technology miniaturisation trends, such as smaller sizes, lower power consumption and increased performance’ (Strydis and Gaydadjiev, 2008, p. 3186) will substantially affect the structure of implants over the coming years. How the growing ability to manipulate matter at the nanoscale will affect specific ICT implants is, however, still uncertain. Nanoscale devices, with at least one dimension smaller than one–tenth of the approximate diameter of a red blood cell, are now being developed, and some of these will probably have biomedical and neurological applications. For example, an engineered nano–fibre and a human neurone may be brought into functional contact. Nanomaterials, nanoelectronics, nano–computing, tissue engineering, as well as nano–enabled production processes, may be converging in ways that raise philosophical, ethical, social and regulatory issues for which we are simply not prepared.

While some believe the convergence of these sorts of technologies is likely merely to exacerbate existing risks, such that changes are likely to be ‘evolutionary, not revolutionary’ (Kosta and Bowman, 2011, p. 257), others suggest the changes will be fundamental. Fiedeler and Krings (2006, p. 5) propose that the ‘further penetration of technology into societal and cultural processes and vice versa is considered as a deep transformation process’. Either way, it is clear that when it comes to nanotechnology we are dealing with some significant uncertainties, and that more than one kind of uncertainty is involved (Hunt and Riediker, 2011). This includes a lack of data, as might be expected considering the field’s newness, but also more intrinsic uncertainties that arise from the complexity of living systems in their responses to nanoscale entities (Hunt and Riediker, 2011). The challenge is to account for this uncertainty in advance, and to offer adequate legislation for possible future unknown, even unpredictable, risks. For example, what might be the potential harm from long nanofibres once in contact with bio–organisms? The impact of asbestos on bio–organisms is just one example that shows this sort of question needs to be taken seriously.

In the field of bio–medical engineering, nanomaterials such as carbon nanotubes can be used for the bio–nano neural interface. Yet the possible health implications of such implants are uncertain. One may raise questions about possible damage to DNA (even inter–generational), to the immune and hormone systems, and to protein–folding, and more generally about biocompatibility and bioaccumulation. The potential for so–called ‘nanomachines’, and the possible control of such machines using swarm intelligence, is also currently being investigated (cf. Eberhart and Shi, 2001). Nanotechnology offers the potential to outpace Moore’s Law, and with the speed of such developments comes the risk that we will not have time to develop strategies, gather knowledge about associated risks, or assess potential harms in relation to benefits.
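Swarm intelligence of the kind Eberhart and Shi discuss is usually illustrated by particle swarm optimisation: simple agents converge on a solution by combining their own best-known positions with the swarm’s. The following is a minimal sketch of that generic algorithm, not of any actual nanomachine control scheme; the objective function, bounds and parameter values are illustrative assumptions only.

```python
import random

def pso(f, dim=2, n_particles=20, iters=200, bounds=(-5.0, 5.0),
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimise f over a box using a basic particle swarm (illustrative sketch)."""
    rng = random.Random(seed)
    lo, hi = bounds
    # Initialise random positions and zero velocities within the search box.
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]              # each particle's best-known position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # swarm's best-known position

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Velocity update: inertia plus pulls towards personal and global bests.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Move, clamped to the search box.
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Example: minimise the sphere function, whose optimum is at the origin.
best, best_val = pso(lambda x: sum(v * v for v in x))
```

Each particle needs only local memory and the shared global best, which is why such schemes are thought attractive for coordinating many very simple agents.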

A threat to human dignity and democratic society?

The EGE report (2005, p. 2) states: ‘In its Opinion, the EGE makes the general point that non–medical applications of ICT implants are a potential threat to human dignity and democratic society’. In this account, dignity ‘is used both to convey the need for absolutely respecting an individual’s autonomy and rights and to support the claim to controlling individuals and their behavior for the sake of values that someone plans to impose on other individuals’ (EGE, 2005, p. 16). They further add that human dignity ‘concerns the self as an embodied self’ (EGE, 2005, p. 28). As such, this is an area requiring regulation, they claim, since at that point (in 2005), and until now, ‘non–medical ICT implants in the human body are not explicitly covered by existing legislation’ (p. 2), despite which ‘ICT implants may, in the future, lead to the transformation of the human race’ (p. 28). Perceptions of the human body as data, as opposed to complex social, cultural and natural beings, with the potential for transformation, have, they claim, ‘large cultural effects’ (p. 27):

particularly as it precludes higher level phenomena such as human psyche and human language or conceives them mainly under the perspective of its digitization, giving rise to reductionism that oversimplifies the complex relations between the human body, language and imagination.

Those who would promote the benefits of technological advances might accept the need to minimise risks (ethical, social, legal, human, environmental) while being wary of ‘compromising the development of a promising and powerful technology such as nanotechnologies’ (Kosta and Bowman, 2011, p. 270). A further difficulty in achieving this delicate balance lies in answering the question: what constitutes risk, hazard or harm? As Wickson et al. (2010, p. 7) explain, ‘While everyone may agree that scientists, policy makers and citizens should work to ensure that nanotechnology does not harm “nature” or “the environment”, there are very different ideas about what these concepts mean, what constitutes harm, and the reasons why we might wish to avoid it’. Optimists like Kosta and Bowman (2011, p. 271) suggest that existing legislation, along with ‘engineering based solutions’, should suffice so long as researchers and manufacturers can be encouraged to integrate precautionary systems within their designs during the early stages of a product’s development. Though they were talking specifically about privacy issues arising from the use of ICT implants, this sort of optimism is not uncommon. Indeed, there are many other voices that question whether developers need to think of these issues at all; legislation and ethical consideration, they cry, can come later. The difficulty with such claims is that, as McDonough and Braungart (2002, p. 26) astutely reflect, ‘At its deepest foundation, the industrial infrastructure we have today is linear: it is focused on making a product and getting it to a consumer quickly and cheaply without considering much else’. We simply cannot ignore the commercial aspect that drives some elements of research, nor the fact that ‘over time some technological practices become so entrenched in society that it becomes difficult to do things differently’ (Feng, 2000, p. 213).
These factors combined mean we ought to consider such questions from the earliest stages of conception and design onwards, alongside related issues pertaining to freedom, dignity, privacy, security, consent, trust, control and equity of access.

As Feng (2000, p. 213) notes, ‘early on in the design process technologies are often malleable enough to be produced and implemented in a number of ways. Hence the need for ethical discussion to take place early on in the design of technologies’. Yet, despite the abundance of questions offered above, current research on implants typically addresses either very general philosophical/ethical issues or practical issues (e.g. technical or regulatory) arising from these technologies. While the former is often discussed within philosophy, the humanities and the social sciences, the latter often comes from within industry, applied research, regulation and insurance. In fact, the absence of sufficient (and multi–disciplinary) discussion on these topics means that the fundamental ethical and conceptual issues arising are not always being considered, and so play too small a role in shaping the innovation and development of such technologies, whether positively or by indicating limits.

How do we balance benefit and risk with regard to advancements in implant technology, and how far can, or should, existing regulation go (whether at pre– or post–production stage) with regard to the use of converging technologies for the development and use of implants? What, if any, role might the precautionary principle play here, and how might we regulate for future possible unknown risks without stifling technological and scientific creativity? I suggest that to fully engage with these issues we must proceed with both optimism and caution, and accept that thinking ethically means there may be no easy or simple answers.

The author is very grateful to Prof Geoff Hunt who was instrumental in the production of this work. Any errors are however entirely my own.


Burger, R. (2002). Enhancing personal area sensory and social communication through converging technologies. In Roco, M. C. and Bainbridge, W. S. (eds.) (2002).
Carmena, J. M., Lebedev, M. A., Crist, R. E., O’Doherty, J. E., Santucci, D. M., Dimitrov, D. F., Patil, P. G., Henriquez, C. S., Nicolelis, M. A. L. (2003). Learning to control a brain–machine interface for reaching and grasping by primates. PLoS Biology. 1: 193–208.
Eberhart, R. C. and Shi, Y. (2001). Particle swarm optimisation: developments, applications and resources. Proc. IEEE Int. Conf. Evolutionary Computation. 1: 81–86.
European Group on Ethics (EGE) ‘Opinion No 20: Ethical Aspects of ICT Implants in the Human Body’. Presented to the European Commission in March 2005.
Feng, P. (2000). Rethinking technology, revitalizing ethics: overcoming barriers to ethical design. Science and Engineering Ethics. 6: 2. 207–220.
Fiedeler, U. and Krings, B. J. (2006). Naturalness and neuronal implants – changes in the perception of human beings. MPRA Paper presented at EASST–conference, University Library of Munich, Germany.
Graham–Rowe, D. (2003). ‘World’s first brain prosthesis revealed’, New Scientist 12 March 2003. Online: [accessed 27/06/11].
Hunt, G. and Riediker, M. (2011). Building expert consensus on problems of uncertainty and complexity in nanomaterial safety. Nanotechnology Perceptions. Vol. 7. forthcoming.
Kosta, E., Bowman, D. M. (2011). Treating or tracking? regulatory challenges of nano–enabled ICT implants. Law and Policy. 33: 2. 256–275.
McDonough, W. and Braungart, M. (2002). Cradle to cradle: remaking the way we make things. New York: North Point Press.
Nsanze, F. (2005). ICT implants in the human body: a review. In Opinion No 20: Ethical Aspects of ICT Implants in the Human Body. Presented to the European Commission in March 2005.
Roco, M. C., Bainbridge, W. S. (eds.). (2002). Converging technologies for improving human performance: nanotechnology, biotechnology, information technology and cognitive science. National Science Foundation, Arlington, Virginia, USA. [accessed 01/07/11].
Soekadar, S., Haagen, K., Birbaumer, N. (2008). Brain–computer interfaces (BCI): restoration of movement and thought from neuroelectric and metabolic brain activity. In Coordination: Neural, Behavioral and Social Dynamics. 229–252.
Strydis, C., Gaydadjiev, G. N. (2008). The case for a Generic Implant Processor. 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC’08), August 2008. 3186–3191.
Tseng, G. Y., Ellenbogen, J. C. (2001). Toward nanocomputers. Science. 294: 5545. 1293–1294.
Wickson, F., Grieger, K., Baun, A. (2010). Nature and nanotechnology: science, ideology and policy. International Journal of Emerging Technologies and Society. 8: 1. 5–23.


