#496 - 03/05/09 07:09 AM Man, machine and in between
SweetMind Offline
Active Member
Registered: 09/11/07
Posts: 201
Loc: Mother Nature world
http://www.nature.com/nature/journal/v457/n7233/full/4571080a.html

Man, machine and in between

Jens Clausen1

1. Jens Clausen is at the Institute of Ethics and History in Medicine, University of Tübingen, Tübingen, Germany. 
Email: jens.clausen@uni-tuebingen.de

Brain-implantable devices have a promising future. Key safety issues must be resolved, but the ethics of this new technology present few totally new challenges, says Jens Clausen.

We are so surrounded by gadgetry that it is sometimes hard to tell where devices end and people begin. From computers and scanners to multifarious mobile devices, an increasing number of humans spend much of their conscious lives interacting with the world through electronics, the only barrier between brain and machine being the senses — sight, sound and touch — through which humans and devices interface. But remove those senses from the equation, and electronic devices can become our eyes and ears and even our arms and legs, taking in the world around us and interacting with it through man-made software and hardware.

This is no future prediction; it is already happening. Brain–machine interfaces are clinically well established in restoring hearing perception through cochlear implants, for example. And patients with end-stage Parkinson's disease can be treated with deep brain stimulation (DBS) (see 'Human brain–machine applications'). Worldwide, more than 30,000 implants have reportedly been made to control the severe motor symptoms of this disease. Current experiments on neural prosthetics point to the enormous future potential of such devices, whether as retinal or brainstem implants for the blind or as brain-recording devices for controlling prostheses1.

Non-invasive brain–machine interfaces based on electroencephalogram recordings have restored communication skills of patients 'locked in' by paralysis2. Animal research and some human studies3 suggest that full control of artificial limbs in real time could further offer the paralysed an opportunity to grasp or even to stand and walk on brain-controlled, artificial legs, albeit likely through invasive means, with electrodes implanted directly in the brain.

Future advances in neurosciences together with miniaturization of microelectronic devices will make possible more widespread application of brain–machine interfaces. Melding brain and machine makes the latter an integral part of the individual. This could be seen to challenge our notions of personhood and moral agency. And the question will certainly loom that if functions can be restored for those in need, is it right to use these technologies to enhance the abilities of healthy individuals? It is essential that devices are safe to use and pose few risks to the individual. But the ethical problems that these technologies pose are not vastly different from those presented by existing therapies such as antidepressants. Although the technologies and situations that brain–machine interfacing devices present might seem new and unfamiliar, most of the ethical questions raised pose few new challenges.

Welcome to the machine

In brain-controlled prosthetic devices, signals from the brain are decoded by a computer that sits in the device. These signals are then used to predict what a user intends to do. Invariably, predictions will sometimes fail and this could lead to dangerous, or at the very least embarrassing, situations. Who is responsible for involuntary acts? Is it the fault of the computer or the user? Will a user need some kind of driver's licence and obligatory insurance to operate a prosthesis?
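The decoding step described here is often approximated in research settings as a linear map from recorded firing rates to intended movement. A minimal sketch in Python, using entirely made-up data (the Poisson "firing rates", the linear model, and the noise level are illustrative assumptions, not the design of any actual device):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical recording session: 200 time steps, 16 recorded "neurons".
n_steps, n_neurons = 200, 16

# A made-up ground-truth linear mapping from firing rates to 2-D velocity.
true_weights = rng.normal(size=(n_neurons, 2))
rates = rng.poisson(lam=5.0, size=(n_steps, n_neurons)).astype(float)
velocity = rates @ true_weights + rng.normal(scale=0.5, size=(n_steps, 2))

# "Training": fit decoder weights to the recorded data by least squares.
w_hat, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

# "Use": predict intended velocity from a fresh burst of firing rates.
new_rates = rng.poisson(lam=5.0, size=(1, n_neurons)).astype(float)
predicted = new_rates @ w_hat

# The fit is never exact; the residual is the technical source of the
# mispredictions (and the liability questions) discussed in the text.
train_error = float(np.mean(np.abs(rates @ w_hat - velocity)))
```

Even in this toy setting the residual never vanishes, which is the technical root of the "who is responsible when a prediction fails" question.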

Fortunately, there are precedents for dealing with liability when biology and technology fail to work. Increasing knowledge of human genetics, for example, led to attempts to reject criminal responsibility that were based on the inappropriate belief that genes predetermine actions. These attempts failed, and neuroscientific pursuits seem similarly unlikely to overturn views on human free will and responsibility4. Moreover, humans are often in control of dangerous and unpredictable tools such as cars and guns. Brain–machine interfaces represent a highly sophisticated case of tool use, but they are still just that. In the eyes of the law, responsibility should not be much harder to disentangle.

But what if machines change the brain? Evidence from early brain-stimulation experiments done half a century ago suggests that sending a current into the brain may cause shifts in personality and alterations in behaviour. Many patients with Parkinson's disease who have motor complications that are no longer manageable through medication report significant benefits from DBS. Nevertheless, compared with the best drug therapy, DBS for Parkinson's disease has shown a greater incidence of serious adverse effects such as nervous system and psychiatric disorders5 and a higher suicide rate6. Case studies revealed hypomania and personality changes of which the patients were unaware, and which disrupted family relationships before the stimulation parameters were readjusted7.

Such examples illustrate the possible dramatic side effects of DBS, but subtler effects are also possible. Even without stimulation, mere recording devices such as brain-controlled motor prostheses may alter the patient's personality. Patients will need to be trained in generating the appropriate neural signals to direct the prosthetic limb. Doing so might have slight effects on mood or memory function or impair speech control.

Nevertheless, this does not illustrate a new ethical problem. Side effects are common in most medical interventions, including treatment with psychoactive drugs. In 2004, for example, the US Food and Drug Administration told drug manufacturers to print warnings on certain antidepressants about the short-term increased risk of suicide in adolescents using them, and required increased monitoring of young people as they started medication. In the case of neuroprostheses, such potential safety issues should be identified and dealt with as soon as possible. The classic approach of biomedical ethics is to weigh the benefits for the patient against the risk of the intervention and to respect the patient's autonomous decisions8. This should also hold for the proposed expansion of DBS to treat patients with psychiatric disorders9.
Bench, bedside and brain

The availability of such technologies has already begun to cause friction. For example, many in the deaf community have rejected cochlear implants. Such individuals do not regard deafness as a disability that needs to be corrected, instead holding that it is a part of their life and their cultural identity. To them, cochlear implants are regarded as an enhancement beyond normal functioning.

What is enhancement and what is treatment depends on defining normality and disease, and this is notoriously difficult. For example, Christopher Boorse, a philosopher at the University of Delaware in Newark, defines disease as a statistical deviation from "species-typical functioning"10. As deafness is measurably different from the norm, it is thus considered disease. The definition is influential and has been used as a criterion for allocation of medical resources11. From this perspective, the intended medical application of cochlear implants seems ethically unproblematic. Nevertheless, Anita Silvers, a philosopher at San Francisco State University in California and a disability scholar and activist, has described such treatments as a "tyranny of the normal"12, designed to adjust people who are deaf to a world designed by the hearing, ultimately implying the inferiority of deafness.

Although many have expressed excitement at the expanded development and testing of brain–machine interface devices to enhance otherwise deficient abilities, Silvers suspects that prostheses could be used for a "policy of normalizing". We should take these concerns seriously, but they should not prevent further research on brain–machine interfaces.

Brain technologies should be presented as one option, but not the only solution, for paralysis or deafness. Still, whether brain-technological applications are a proper option remains dependent on technological developments and on addressing important safety issues.

One issue that is perhaps more pressing is how to ensure that risks are minimized during research. Animal experimentation will probably not address the full extent of psychological and neurological effects that implantable brain–machine interfaces could have. Research on human subjects will be needed, but testing neuronal motor prostheses in healthy people is ethically unjustifiable because of the risk of bleeding, swelling, inflammation and other, unknown, long-term effects.

People with paralysis, who might benefit most from this research, are also not the most appropriate research subjects. Because of the very limited medical possibilities and often severe disabilities, such individuals may be vulnerable to taking on undue risk. Most suitable for research into brain–machine interface devices are patients who already have an electrode implanted for other reasons, as is sometimes the case in presurgical diagnosis for epilepsy. Because they face the lowest additional risk of the research setting and will not rest their decision on false hopes, such patients should be the first to be considered for research13.

Brain–machine interfaces promise therapeutic benefit and should be pursued. Yes, the technologies pose ethical challenges, but these are conceptually similar to those that bioethicists have addressed for other realms of therapy. Ethics is well prepared to deal with the questions in parallel to and in cooperation with the neuroscientific research.

The list of references is in the original article, if you would like to check them yourself.
_________________________
"Light of Love" in our ASL culture. ASL is a form of speech and gives LOVE for all humanity kids. smile
#525 - 04/20/09 07:15 PM Re: Man, machine and in between [Re: SweetMind]
CSN Offline
Active Member
Registered: 10/09/07
Posts: 162
Loc: Omaha, NE
Sweetmind,

Interesting topic! "Interesting"? You betcha. smile This is in light of programs seen on PBS about man-machine. Oops! This is not to leave out 'women'. But I think you may understand what I mean.

True, brain implantable devices create more questions than they answer. They reveal the complexities of the brain and neural systems.

Quote:
... conscious lives interacting with the world through electronics.


Thus, it is stated that lives are interacting with, but not dependent upon, electronics. As human beings we are in control of our lives and make conscious use of electronics among countless other things. As I type this out I am consciously making use of electronics. As a person reads this, they are likewise making use of electronics. One element of this is that it is a conscious choice made by me and by anyone choosing to read it.

While electronics may be used to supplement human functioning, there is no way that electronics can replace human functioning. For one thing, all electronics have a limited lifespan. Comparably, physical parts of the human body can and do last an entire lifespan. Supplementing human functioning is OK as long as those wishing to do so realize that there will be issues when attempts are made to replace natural human physical functioning with constructed mechanisms.

The interface of brain-machine is acceptable and understandable under certain circumstances, but not under all circumstances. Should someone assert that 'under all circumstances' is acceptable, then they are misunderstanding the functioning of the human brain. For example, there are certain functions that are in use 24/7. The flow of blood would be an example. Other functions have less demand for usage. Examples would be sensory functioning. Many persons (myself included LOL) have fallen asleep during a TV show. Yet, the broadcast continued! Thus, sensory perception is not essential.

While 'non-invasive brain-machine interfaces' exist today, they are questionable in terms of accuracy and validity. Further, many, many times the results are interpreted for the patient by the physician. Thus, the human capacity to formulate conclusions based on scientific data is what is beyond the capacity of the machine. What the machine can do is what it has been designed and/or programmed to do. IOW, the machine, when designed and constructed, is a product of human thought!

Further, the human body does not know that certain electronics or constructed mechanisms are designed to do away with certain aspects. The human body only recognizes something foreign as just that - foreign! Therefore, the human body will fight against whatever it is. Hence, anti-rejection medications. Such things are useful in terms of life-sustaining mechanisms such as an artificial heart.

Ethical considerations have always posed questions concerning the human body. And they always will. Which is a good thing, too. We - as both a society and a human species - should always seek to understand, always seek to ask questions concerning ethics.

Quote:
Who is responsible for involuntary acts? Is it the fault of the computer or the user?


This then poses many more questions. If the individual is not considered to be responsible, then who is? This is assuming, of course, that actions are held to be the responsibility of someone. In Western societies actions are held to be the responsibility of someone. An act by a human being is not the responsibility of a non-human entity. Taking this question one step further, is the responsibility that of the computer designer? The computer programmer? Who then is ultimately responsible for actions and/or inactions?

Quote:
Such examples illustrate the possible dramatic side effects of DBS, but subtler effects are also possible.


Anytime the brain is disrupted - even with the noblest of intentions - there will be effects both overt and subtle. One point that I may have made previously but applies here is that "The more we know, the more we know that we don't know". Thus, we may believe that certain neural functioning happens in a certain manner with predictable results, but actually we are only guessing. The human brain is far too complex to be described in a simplistic manner.

When technology is based on prejudice against certain functions then it is null and void in terms of validity.

Defining disease as a statistical variation removes the human element. Describing diseases in terms of statistics does have some validity. However, it is the manner in which these statistics are interpreted that gives them meaning. For instance, a certain medical procedure may adversely affect "only" 10% of those undergoing it. Yet, to a person adversely affected, the rate is 100%! Therefore, statistics must be carefully reviewed. Further, statistics only give a glimpse of the situation at the time the report was written. Statistics have low predictive validity. Further, they need to be interpreted in terms of comparison with existing standards.
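The "statistical deviation" idea under discussion can be made concrete with a toy sketch in Python. The reference population and the 2-standard-deviation cutoff below are invented for illustration (Boorse's actual account is more elaborate), which is itself the point: the cutoff is a modeling choice, not a biological fact.

```python
import statistics

# Invented "reference population" of hearing thresholds in dB (toy data).
population = [5, 10, 8, 12, 7, 9, 11, 6, 10, 8]
mean = statistics.mean(population)
sd = statistics.stdev(population)

def deviates(threshold_db, cutoff_sd=2.0):
    """True if the value lies beyond cutoff_sd standard deviations
    from the population mean. The cutoff is an arbitrary convention."""
    return abs(threshold_db - mean) > cutoff_sd * sd

typical = deviates(9)     # near the mean -> False
profound = deviates(95)   # far from the mean -> True
```

Everything that makes the verdict flip - the reference population, the cutoff, even the choice of what to measure - sits outside the statistics, which is where the interpretation problem described above lives.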

I would suggest to Mr. Boorse that he attend a meeting of a Deaf Club. There, he will find total communication by those he may have thought incapable of it because of being deaf. However, he will be sharply awakened when he realizes that the communication is total, and that there is no need for devices. Further, he will realize that deafness does not need a "cure". The only things needing to be "cured" are asinine attitudes based on prejudice.

I agree - to an extent. We should not stop researching the brain and/or its functioning. This will help us to understand the complexities of the brain, the specific functioning, and the overall functioning of the human body. We have only begun to understand the complexities of the brain and the neurological systems.

One thing about the use of animals to study human functioning is that there are ethical considerations when using animals. Oops! I shoulda made that "Two things...". The second thing is that as we go from animal to human the complexity increases at a geometric progression. Humans are far more complex than even chimps that are "near human".

Those with paralysis will agree that some procedures and/or instruments work while others with the same procedure and/or instrument will find that it does not work! Because of the complexity of the human body this is almost a given. Further, as adults they are capable of making a rational decision taking into consideration where they are in their life now, and what they believe their life will be in the future. Children are not capable of this.
#527 - 04/22/09 12:51 AM Re: Man, machine and in between [Re: SweetMind]
CSN Offline
Active Member
Registered: 10/09/07
Posts: 162
Loc: Omaha, NE
Sweetmind,

My comments about this -

Quote:
We are so surrounded by gadgetry that it is sometimes hard to tell where devices end and people begin.


This is especially true when devices are chosen to be a part of one's life. Then, their personal identity is affected, sometimes in a negative manner. This would be so if a person's identity is heavily based on their use of devices. This would be true if the devices are chosen by the individual or imposed by others.

Quote:
....the only barrier between brain and machine being the senses — sight, sound and touch — through which humans and devices interface.


Actually, the senses are not a barrier to the use of technology and electronics; rather, the senses make use of electronics in an organized manner that has been learned. Further, with human decision-making as primary, the ultimate computer - the brain - is used to the benefit of the individual as well as society in general.

Restoring perceptions is not the same as restoring the original mechanisms and - more importantly - the results of those mechanisms.

Quote:
Animal research and some human studies3 suggest that full control of artificial limbs in real time could further offer the paralysed an opportunity to grasp or even to stand and walk on brain-controlled, artificial legs, albeit likely through invasive means, with electrodes implanted directly in the brain.


Full control of artificial limbs has a different definition to differing people. Further, this 'full control' will take intensive training, and it is questionable when and for how long it is possible. Another question surrounds the issue of side-effects of "full control".

Quote:
This could be seen to challenge our notions of personhood and moral agency.


The notions of 'personhood' and 'moral agency' have confronted human beings since the beginning of time. These have always been dealt with in ways that are different in different times. Further, the results of such questioning have never had full definition. Maybe this is better off left unanswered. For if we think we have resolved these issues then we have given up on something that makes us more than human beings, something that makes us persons.

Quote:
Invariably, predictions will sometimes fail and this could lead to dangerous, or at the very least embarrassing, situations. Who is responsible for involuntary acts?


That these predictions will "sometimes" fail goes against the predictions made for computers; further, making use of the human brain invariably results in what is "sometimes" a failure. In this sense, then, the machine is no better than the most powerful machine there is - the human brain.

Quote:
Increasing knowledge of human genetics, for example, led to attempts to reject criminal responsibility that were based on the inappropriate belief that genes predetermine actions.


Therefore, responsibility cannot be assigned to genetics; responsibility needs to rest with human beings! We are responsible for our actions and/or inactions. We cannot assign responsibility to machines. If they are not responsible for human functioning, then who and/or what is responsible? Could it be that as human beings we are responsible for our own actions? Yes!!

Quote:
Nevertheless, compared with the best drug therapy, DBS for Parkinson's disease has shown a greater incidence of serious adverse effects such as nervous system and psychiatric disorders5 and a higher suicide rate.


DBS is a dangerous thing to guess at. The effects are not predictable, and the "higher suicide rate" makes it highly questionable. If something that is "supposed to" be of assistance actually results in suicide, then it is highly unethical to further its use.

Quote:
Even without stimulation, mere recording devices such as brain-controlled motor prostheses may alter the patient's personality


Having an adverse effect upon a patient's personality makes the procedure highly questionable and unethical.

Quote:
In 2004, for example, the US Food and Drug Administration told drug manufacturers to print warnings on certain antidepressants about the short-term increased risk of suicide in adolescents using them, and required increased monitoring of young people as they started medication.


One issue that is not addressed is that what is thought to be an appropriate dosage of a medication is based on highly generalized data. IOW, what may seem to be the appropriate dosage for a specific condition may be a harmful dosage to a certain group or to a specific individual.

Further, it is highly unethical to prescribe medications to adolescents without taking into account the fact that their bodies are still developing, which includes their neural mechanisms and their brain.

Quote:
As deafness is measurably different from the norm, it is thus considered disease.


Being measurably different from the norm does not make something a disease. The concept of what is a "disease" is based on one's personal values. Therefore, a definition that is applicable to all is impossible! Should a significant number of persons be affected in a manner that is off the statistical norm, does that then constitute a "disease"? In this sense, because a certain belief is 'measurably different from the norm', does that make it a disease?

Quote:
Still, whether brain-technological applications are a proper option remains dependent on technological developments and on addressing important safety issues.


Actually, this remains more dependent upon the values and beliefs of the individual.

Quote:
Animal experimentation will probably not address the full extent of psychological and neurological effects that implantable brain–machine interfaces could have.


These issues need to be addressed fully. Also, with the complexity of the human brain far outweighing the complexity of any other entity or animal, the results of animal experimentation cannot be generalized to applicability with human beings.

Quote:
Yes, the technologies pose ethical challenges, but these are conceptually similar to those that bioethicists have addressed for other realms of therapy.


The ethical questions that are asked raise further questions when supposed 'answers' are found. The answers point out the complexity of the issue and raise further questions. However, we should never stop asking these questions. In doing so, we are pointing out that we are using that which is most complex and least understood - our brains!
#530 - 04/23/09 10:03 PM Re: Man, machine and in between [Re: SweetMind]
CSN Offline
Active Member
Registered: 10/09/07
Posts: 162
Loc: Omaha, NE
Sweetmind,

I tend to question brain-implantable devices because of how little we actually know about the brain. With the brain being so complex I wonder if we can ever formulate what we know and what we will in the future know into a viable concept. This gets into the idea that the more we know the more we know that we don't know. We are limited by the accuracy of our machinery *and* ideas.

True, safety issues need to be resolved. Accuracy, safety, and applicability are only a few of the areas that remain under question. These questions will remain for the rest of time.

The question of person-device is easily answered. However, the question of human being-device is more questionable. The difference being (one difference) that no machine can have either originality or emotion. By removing the senses from the equation, we are removing persons from the equation. No manufactured device can perform the actions of the senses as accurately as the original equipment.

I have seen the use of artificial arms and hands. The ability to grasp a pop can is there but the ability to control the details of the grasp such as intensity and finite accuracy of grasp are not. Nor is the accuracy there to open the can!

Going from animal research to human beings is not an accurate thing to do. We cannot generalize the results using animals as research subjects and apply them to human beings. The complexity of human beings is a quantum leap from the complexity of animals.

Quote:
And the question will certainly loom that if functions can be restored for those in need, is it right to use these technologies to enhance the abilities of healthy individuals?


The unasked question in this regard is that of how is "need" determined? Is "need" determined by the individual? Or is it determined by others? If it is determined by "others" is there a standard by which this is determined? If the individual meets the criteria of the standard and yet chooses not to see things in this manner, which view is accurate?

If the technology is used to enhance healthy individuals when the technology is designed for those who are considered "disabled" does this not mean that healthy individuals and so-called "disabled" individuals are not different? If this is the case, what then is the rationale for considering someone to be disabled? The rationale is opinion! Therefore, deaf are not to be considered "disabled" unless this is seen as opinion!

Quote:
These signals are then used to predict what a user intends to do.


The prediction is mathematical and according to formula. As a result, the human factor of choice (and/or changing that choice) is disregarded.

Quote:
... the inappropriate belief that genes predetermine actions.


Correct, this was an inappropriate belief and continues to be so.

Quote:
Doing so might have slight effects on mood or memory function or impair speech control.


Thus, the behavior and/or emotion of the human being are at risk.

Quote:
Still, whether brain-technological applications are a proper option remains dependent on technological developments and on addressing important safety issues.


Further, this remains dependent upon the issue of choice. Also, the issue of interaction with medication remains. But the biggest issue is that of those D/deaf not seeing being deaf as needing such imposition. Especially when considering children. Once again, are we to take away their ability and their option of choice?
#533 - 04/29/09 06:26 PM Re: Man, machine and in between [Re: SweetMind]
CSN Offline
Active Member
Registered: 10/09/07
Posts: 162
Loc: Omaha, NE
Sweetmind,

Brain-implantable devices may or may not have a promising future. This depends largely on what we know about the brain, the neural systems, and how this all interrelates with the body - both as a whole and in terms of differing systems. What we know now may seem as if we know how things are interconnected, but the questions remain - is what we know now valid, and how is what we know useful? We can apply what we believe is accurate; however, there is always a margin of error that any scientist beyond the introductory level would be familiar with.

It may or may not be "hard to tell" where the device is separated from the human being. In medicine, devices are used to save lives (yet this application is obviously not 100% effective) and/or to supplement lives (which is not 100% effective either). An example of this would be pacemakers. As necessary and sufficient as they may be, they are still subject to errors in implantation and use. Thus, the dichotomy of human - machine is evident.

Correct, many, many humans do interact with the world through the use of electronics. It is the choice of the human being as to how, when, where, and why this interaction takes place. The human being has a choice as to any of these variables.

If we remove "these senses" then we take away the human choice and option to use the senses!

Quote:
But remove those senses from the equation, and electronic devices can become our eyes and ears and even our arms and legs, taking in the world around us and interacting with it through man-made software and hardware.


If we remove the senses from the equation then we are altering the equation in itself. The assumption is made that electronic devices can substitute. Which is erroneous. There is nothing that is constructed that can take the place of that which occurs naturally. Should man-made software and hardware attempt to take the place of what occurs naturally then we become robots. Hardly a humanistic thing to do! Further, should software and hardware (both of which are constructed) be used to replace that which occurs naturally the human body's tendency to defend itself against foreign intrusion (no matter what the intent is) will be activated. This can be fought against with certain drugs and/or chemicals. Then, the whole issue is complicated beyond natural reasoning.

Quote:
Worldwide, more than 30,000 implants have reportedly been made to control the severe motor symptoms of this disease. Current experiments on neural prosthetics point to the enormous future potential of such devices, whether as retinal or brainstem implants for the blind or as brain-recording devices for controlling prostheses1.


By controlling the symptoms, the underlying issue of Parkinson's is still there. The question of whether or not the person still has Parkinson's is moot.

Quote:
Animal research and some human studies3 suggest that full control of artificial limbs in real time could further offer the paralysed an opportunity to grasp or even to stand and walk on brain-controlled, artificial legs, albeit likely through invasive means, with electrodes implanted directly in the brain.


There are therapies offered now that do not require surgery (which in itself has a margin of error). These therapies are offered by the Veterans Administration. Further, there have been persons with artificial limbs who have run in 5K and 10K races, and even marathons. These limbs have been used without electronic stimulation (they require therapy, as does the use of electronics). An interesting book about a Vietnam veteran who had a leg amputated and has been successful in this regard is Waltzing Matilda by former Senator Bob Kerrey.

Quote:
Melding brain and machine makes the latter an integral part of the individual. This could be seen to challenge our notions of personhood and moral agency.


This would depend on how "the individual" is defined both now and at a future point. The question of 'moral agency' complicates the question of responsibility for actions, thoughts, beliefs, etc...

Quote:
Invariably, predictions will sometimes fail and this could lead to dangerous, or at the very least embarrassing, situations.


The question of responsibility for actions, beliefs, etc... is furthered by having the option of blaming the machine for what happens. The question of "danger" is compounded by the possibility of being a danger to others while having the option of irresponsibility. This is especially harmful in terms of children who are beginning to learn that they are responsible for what they do. If they can blame a computer or a machine, then how do they internalize the 'responsibility for self' that is needed for maturity?

Quote:
But what if machines change the brain? Evidence from early brain-stimulation experiments done half a century ago suggests that sending a current into the brain may cause shifts in personality and alterations in behaviour.


Machines have been changing the brain for some time now - electroshock therapy. After this, the person is changed, sometimes in a subtle manner, sometimes in an overt manner. However, the end-results are unpredictable even today!

Quote:
Nevertheless, compared with the best drug therapy, DBS for Parkinson's disease has shown a greater incidence of serious adverse effects such as nervous system and psychiatric disorders5 and a higher suicide rate6.


This would, I hope, cause persons to wonder whether it is worth it to undergo DBS. With the results being permanent, negative, and compounded, I question the efficacy of such treatment.

Quote:
What is enhancement and what is treatment depends on defining normality and disease, and this is notoriously difficult.


Defining normality is not that difficult in terms of deafness. Deafness itself is a naturally occurring phenomenon. Even those not born deaf are mostly deaf because of something that is natural to human beings. Whether this is an action or some other stimulus, the cause-effect scenario is there.

Quote:
For example, Christopher Boorse, a philosopher at the University of Delaware in Newark, defines disease as a statistical deviation from "species-typical functioning"10. As deafness is measurably different from the norm, it is thus considered disease.


By this definition, heroism such as saving a child from a burning car would be a disease! It is a 'statistical deviation from species-typical functioning'. Human beings are more complex than to rely on definitions that apply to animals. This definition takes away the freedom of choice that most actions involve. With the majority of persons being right-handed, does this mean that being left-handed is a 'disease'? At one time this was the belief of many. However, this is not the belief of the majority (if any).

Quote:
... "policy of normalizing".


This would take away the freedom of choice by making the choice externally constructed. Further, this type of policy denies the freedom of choice for those who do not see the pathological perspective as valid.

Quote:
Animal experimentation will probably not address the full extent of psychological and neurological effects that implantable brain–machine interfaces could have.


Animal experimentation cannot be generalized to humans because of the complexity of the human being.

