bkdelong: (Default)
bkdelong ([personal profile] bkdelong) wrote2008-06-13 02:26 pm

BrainStream: Bio-Tech Augmentation or Full Techno-Cyborgian Implants?

Woah. Two BrainStreams in one week. A new record for the last year or two.

(And, yes, I REALLY like to be overly-bombastic in my grammatically-crippled word-creation when I do these things. It's my little "cherry on top".)

I do this silly thing when I get into large crowds. I "shut down" the fast-spinning, erratic, multitasking hard drive that is my brain and go into "navigation and evasion" mode. It sounds very technical, like I'm programming a navigation computer, but in many ways that's how I treat my brain.

I stop thinking about everything else and imagine red tracking circles with labeled lines coming off them - much like what one would expect to see in a Sci-Fi movie where the first-person view is replaced by what is assumed to be a Heads-Up Display (HUD), be it a visor or a cyborg eye.

I then pretend to analyze the traffic flow, predict the gap lengths and timings, and program a route that will enable me to keep moving at my present speed while managing to avoid plowing into someone. This is mostly done in T stations on the Green and Red lines, sometimes in malls and, recently, walking through Times Square from The Roosevelt Hotel to Penn Station.
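For the fun of it, here's roughly what that "navigation and evasion" routine might look like if my brain really were a navigation computer. This is a toy sketch, purely illustrative: every function name, number, and unit below is invented.

```python
from math import cos, sin

# Toy sketch of "navigation and evasion": given other walkers' positions
# and velocities, predict whether a straight-line path stays clear.
# All names and numbers are made up for illustration.

def will_collide(my_pos, my_vel, other_pos, other_vel,
                 horizon=5.0, radius=0.6, dt=0.1):
    """Step both trajectories forward and flag any close approach."""
    t = 0.0
    while t <= horizon:
        mx = (my_pos[0] + my_vel[0] * t, my_pos[1] + my_vel[1] * t)
        ox = (other_pos[0] + other_vel[0] * t, other_pos[1] + other_vel[1] * t)
        if ((mx[0] - ox[0]) ** 2 + (mx[1] - ox[1]) ** 2) ** 0.5 < radius:
            return True
        t += dt
    return False

def pick_heading(my_pos, speed, others, headings):
    """Return the first candidate heading (radians) that avoids everyone."""
    for h in headings:
        vel = (speed * cos(h), speed * sin(h))
        if not any(will_collide(my_pos, vel, p, v) for p, v in others):
            return h
    return None  # no clear route: slow down instead
```

So: a walker coming straight at you forces a sidestep heading; a walker standing well off your path doesn't.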

Anyway, the visualization seems to work well for the most part, and unless I'm incredibly overtired it's one of the few times I can fully override my ADHD. The other is when something puts me in "incident response" mode, but when that happens it's VERY, very hard for me to "stand down" and focus on my usual tasks.

Times like those, I would KILL for an augmented-reality, gesture-recognition, multi-input touch interface containing a symbiont AI that almost predicts what I want to do before I move my hands in place. But I digress.

It occurred to me on the Green Line between North Station and Government Center this AM that, really, all I needed was an implant that translated the signals my brain was sending out to my various body parts (and vice versa) and converted them to imagery sent to my visual cortex.

Think about it: you see the calculations, a message saying "go now" based on your instinctual urge to go. You see someone you think you recognize, and the implant captures, sharpens, and enhances the weak memory, solidifying it into something recognizable.

In work/infosec terms, I need a good dashboard and reporting GUI for my Body Event and Information Management (BEIM) system, i.e. my brain. I mean, what does one's brain do but correlate the various events and information going on in the body in order to manage fast remediation and incident response? It just happens to be able to do it far faster than any SEIM correlating across a WAN EVER could (Security Event and Information Management and Wide Area Network, for those still playing along). I wonder how many events per second the average adult human brain can manage.
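For the similarly afflicted, here's a minimal sketch of what BEIM-style correlation might look like. This is illustrative only; the event names, the window, and the threshold rule are all invented.

```python
from collections import deque

# Minimal sketch of the "BEIM" idea: correlate streams of body events the
# way a SEIM correlates log events, and raise an alert for remediation.
class Correlator:
    def __init__(self, window=10.0, threshold=3):
        self.window = window        # seconds of history to consider
        self.threshold = threshold  # distinct sources needed to alert
        self.events = deque()       # (timestamp, source, signal)

    def ingest(self, ts, source, signal):
        self.events.append((ts, source, signal))
        # drop events older than the correlation window
        while self.events and ts - self.events[0][0] > self.window:
            self.events.popleft()
        # fire when enough distinct sources report the same signal
        sources = {s for (_, s, sig) in self.events if sig == signal}
        if len(sources) >= self.threshold:
            return f"ALERT: {signal} correlated across {len(sources)} sources"
        return None
```

One "low_glucose" report is noise; the same signal from three organs inside the window is an incident.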

(I can't believe I'm thinking about this from an information security perspective. It takes ALL the fun out of it.)

Since we don't truly understand our own bodies and rely on doctors to "diagnose," such an interface may be valuable in helping us understand everything from when our blood sugar is low to when some other impending or current medical event needs addressing. It wouldn't hurt to map out what's going on in the body and provide our conscious self with insights that would allow us to make course corrections for self-repair. Don't want it in live mode? Bluetooth/RFID/broadcast-your-own-protocol to an external device for storage and later reading.

Based on your Personal Privacy Preferences and the degree of urgency for each potential issue, you could immediately upload certain information to your doctor for translation and action. Of course, with such diagnostic tools at your disposal, the need for a doctor to fully manage your health would become far less pressing. I wouldn't say irrelevant, but who knows.
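A toy sketch of how those Personal Privacy Preferences might gate an upload. The categories, urgency levels, and routing outcomes here are entirely hypothetical.

```python
# Hypothetical policy: each reading gets an urgency level, and the user's
# preferences decide whether it's uploaded to the doctor, stored locally,
# or discarded.
PREFS = {
    "cardiac": {"upload_at": "high", "store": True},
    "glucose": {"upload_at": "medium", "store": True},
    "sleep":   {"upload_at": "never", "store": False},
}
URGENCY_RANK = {"low": 0, "medium": 1, "high": 2}

def route_reading(category, urgency):
    # unknown categories default to the most private handling
    pref = PREFS.get(category, {"upload_at": "never", "store": False})
    gate = pref["upload_at"]
    if gate != "never" and URGENCY_RANK[urgency] >= URGENCY_RANK[gate]:
        return "upload_to_doctor"
    return "store_locally" if pref["store"] else "discard"
```

The design point is that urgency alone never overrides the preferences: a "never upload" category stays private even in an emergency, unless you choose otherwise.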

I just don't know enough about the feature set of current nanoprobic body sensors (do any actually exist yet?) to see how much information we can gather about the body with something microscopic and untethered. Heck... has any research even been done on broadcasting signals from a nanoprobe deep inside the human body to a device on the outside? Perhaps relaying through the nervous system? What if that's compromised? Maybe a set of waypoints along the way?
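If those waypoints ever existed, the routing logic could be as simple as skipping any hop flagged as compromised. A purely hypothetical sketch, with made-up waypoint names:

```python
# Hypothetical multi-hop relay: carry a reading from a deep probe to an
# external receiver through intermediate waypoints, routing around any
# that are marked compromised.
def relay(reading, waypoints, compromised=frozenset()):
    """Return the hop-by-hop path, or None if no healthy route exists."""
    path = ["probe"]
    for wp in waypoints:
        if wp in compromised:
            continue  # skip a compromised relay
        path.append(wp)
    if len(path) == 1:  # no healthy waypoint could carry the signal
        return None
    path.append("external_device")
    return {"payload": reading, "path": path}
```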

Ah, mad science. Love it.

[identity profile] spin1978.livejournal.com 2008-06-14 12:59 am (UTC)(link)
You know, BK, I'm trying to write a journal article at the moment! Just kidding. :)

First, without getting into any of the other topics, using something like Bluetooth/RFID to download data from implanted devices (whether available or hypothetical) is actually a pretty sweet idea. I'm not up to date on the current technology in medical device engineering (though I have this niggling tic in the back of my mind that says something along these lines, or some other RF-related medical device, is being worked on or prepared for the clinic), but RF radiation is not as severely attenuated in human tissue as other forms of EM radiation are. Of course, I would imagine one would need to develop an adequately sensitive receiver (considering the space limitations for sticking something into a person, you couldn't put in a huge power supply), but the receiver could be, say, a PDA-type device that you latch onto your arm for a download of physiological/metabolic data. The converse is also true (and has, after a fashion, made it into practice): using RF devices to effect changes in the person's body. (The practical application of just using RF radiation, with no devices that I know of, for a health benefit is in cardiology, if I recall correctly.)

As far as the issue of brain/nervous system interfaces with electronics, my understanding of the field is that you can take it in a couple of different directions. There's the direct approach, where you actually try to interface neurons/neural tissue with electronics, necessitating invasive (surgical) methods. There's also the indirect approach, where you hope that multiple sensory inputs (such as electrode measurements from an EEG) can do the job. There was also an interesting report a while ago that people were able to play simple computer games based on their response in an MRI experiment. My impression of the direct/surgical methods is that outside of a few applications (attempting to restore some degree of sight/hearing, for instance), they're still in a research/developmental stage.

I haven't the foggiest actual clue about portable/wearable medical diagnostic technology at either the microscale or nanoscale level, but I do really like the idea of letting everyone keep tabs on their health and well-being without being a slave to their physician's office schedule or lab availability. The question here is how to get the maximum amount of reliable information with the least amount of encumbrance. Trying to integrate it in or on a typical human body raises lots of questions for me, but maybe something like an at-home "health monitoring" system might work really well, presuming one can do the underlying science. Maybe a single pinprick of blood gets funneled into a microfluidics-based (eventually nanofluidics) "lab on a chip" system that can run a number of assays (which can be varied according to one's wishes), maybe the optical imaging gurus can devise some simple diagnostic tools, and who knows what else could be miniaturized.

I envision the real issue in bringing this to people being their willingness to accept invasive procedures that implant long-lasting devices versus non-invasive procedures that are "test/scan and be done with it." I come down on the non-invasive side, but if it were something like wearing a cap with EEG sensors that could provide information, or a watch that could run a few genuinely useful tests while pressed up against my wrist, I'd be all for it.

[identity profile] bkdelong.livejournal.com 2008-06-14 02:08 am (UTC)(link)
Heh. What's the topic of said article?

Regarding sticking in a power supply, you're right. I mean, we're still in the age of pacemakers and such. I really need to send these discussions over to the Transhumanism lists I'm on and have them weigh in. I saw a post go by about an upcoming Neural Interfaces conference, but it's in Cleveland, and I can't find out what any of the posters are. Ah well.

AI in standard robots isn't smart enough to do what I visualize these nanobots doing... I think I have too many expectations that we won't see come to fruition for at least a century. Obviously the Sci-Fi dreamer in me sees bio-powered nanobots/probes. And, let's face it, there's plenty of stuff to run off of in the human body once you get that small.

[identity profile] spin1978.livejournal.com 2008-06-14 02:12 pm (UTC)(link)
The article is about small molecule binding to a human metabolic enzyme (mostly localized in the liver and intestine) that is involved in the metabolism of about 50% of all drugs on the market. Some of the available information about the structure of the enzyme/small molecule complex does not make chemical sense (the small molecule is too far away for any chemistry to occur on a reasonable timescale), and we propose that it's (partially) due to the conditions where they obtained the original data. Anyway!

Cells naturally have a way to "convert" electrical potentials into chemical energy; going from chemistry to electricity (at least in principle, since the exact reverse may not work) would be one way, depending on the power requirements. Of course, one would have to make sure not to kill off the cell by draining too much energy. I unfortunately haven't kept up with much of the literature in this field, but while the interiors of cells are aqueous, they're not really watery: they're viscous and jam-packed with molecules, proteins, and other cellular material. So any hypothetical probes would need to be able to navigate this viscous environment. Then again, we might be able to exploit the inhomogeneous internal nature of the cell for some useful end, since it's not a perfectly symmetric environment.

I would imagine that faster progress might come from developing one-way interfacing systems and coupling them, rather than trying to build two-way systems from the start. So, for instance, instead of a single two-way channel, you have separate transmitter and receiver channels (or, more generally, separate input and output channels).
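A quick sketch of that coupling idea, with two independent one-way queues standing in for the channels. The queue-based "channels" and all the names here are purely illustrative.

```python
from queue import Queue

# Two independent one-way channels coupled into one interface, instead of
# a single bidirectional channel.
class OneWayChannel:
    def __init__(self):
        self._q = Queue()
    def send(self, msg):
        self._q.put(msg)
    def receive(self):
        return self._q.get()

class CoupledInterface:
    """Pairs a sensor (outbound) channel with a stimulus (inbound) one."""
    def __init__(self):
        self.outbound = OneWayChannel()  # body -> device readings
        self.inbound = OneWayChannel()   # device -> body commands

    def read_sensor(self):
        return self.outbound.receive()
    def send_stimulus(self, cmd):
        self.inbound.send(cmd)
```

The appeal is that each direction can be developed, tested, and failure-isolated on its own; a fault in the stimulus path can't block sensor readings.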

One of the sticking points, I'd say, in really being able to apply any sort of engineering to the mind or body is that there's still quite a bit we don't know about the brain and mind, much less individual cells, in the sort of rigorous physical detail we'd like to eventually possess. Obviously, that shouldn't stop us from trying, but there's a lot of basic information and knowledge still out there to be discovered.

P.S. - The Neural Interfaces conference looks interesting, from what I could deduce from the session titles.