bkdelong: (Default)
So I'm starting this at approx. 9:45pm ET, about 15 minutes after arriving home from tonight's BeanSec. I'm going to do my darnedest to capture, in keywords, the conversation of the night, with pertinent links.

The key test here is, sans any ADHD meds in my system, trying to get a single stream of thought into this journal and then follow up with the links that will actually make the post interesting - and not getting sidetracked by any number of actual activities on my machine's desktop or the physical distractions now vying for my body's attention.

I owe a huge amount of this to Michael Katsevman aka Anateus, much to the disappointment of fellow BeanSec-ers and SecurityTwits Paul Davis, Zach Lanier, Jack Daniels, and a new attendee. Oliver Day left early to "go to class" but I'm sure he was scared off.



There was probably more, but the problem was I had to get from the Middlesex Lounge to my car at MIT and home to Salem. The lack of my most-wished-for piece of futuristic communications lazyware (subvocal recognition), plus a continued increase in brainstream noise, made it nearly impossible to focus.

Made it though....enjoy.
bkdelong: (Default)

Woah. Two BrainStreams in one week. A new record for the last year or two.

(And, yes, I REALLY like to be overly-bombastic in my grammatically-crippled word-creation when I do these things. It's my little "cherry on top".)

I do this silly thing when I get into large crowds. I "shut down" the fast spinning, erratic, multitasking hard drive that is my brain and go into "navigation and evasion" mode. It sounds very technical and like I'm programming a navigation computer but, in many ways, that's how I treat my brain.

bkdelong: (Default)

I've been waiting to write about this all morning (via BoingBoing). It seems a bunch of intrepid researchers at the Rehabilitation Institute of Chicago have taken work done on subvocal recognition (SVR) and applied it in such a fashion that one can drive/control a wheelchair through "thinking" about where they want it to move.

(NB: I've contributed heavily to the SVR Wikipedia entry and liberally link to said service below. I acknowledge that most entries are not the work of expert researchers, so YMMV.)

I first got excited about SVR after reading Cory Doctorow's "Down and Out in the Magic Kingdom", in which the protagonist makes phone calls and interacts with his PDA subvocally. That is to say, the device detects the electrical signals sent from his brain to his larynx that would normally be translated into speech in the vocal tract. NASA and DARPA have both done work on this in the past few years, with the subject wearing a collar that detects the signals.

I think my endgame with SVR is really what Cory had in mind in "Down and Out" - be able to not only talk silently on the phone but to control a PDA, surf the web etc.

The phone hardware or service would convert the signals to speech - think of it: ring-tones now, and in the future, voice profiles - make your voice sound like anyone, in any language. Somehow I don't think the Intelligence Community would appreciate that. It makes it difficult to determine who is who on the phone and would kill Voice Stress Analysis. I'm guessing the FCC and/or the ITU would want some say in how that works. Other issues include protecting one's own voice profile and the damage this could do to voice-based biometrics.

I wouldn't worry - voice recognition as it stands today is still complete rubbish ([livejournal.com profile] kestrell any thoughts on that from the accessibility side of things?). I still think the need to continually train every new piece of voice recognition software is a waste of time; they should all be required to adhere to a voice profile standard. Said standard would, in theory, allow you to train something once, export it to an open format, and import it into the same software (got to love the lack of portability of licenses!) or any other similar product. At a minimum, it should at least bring the user halfway to training the program to work well. The big risk, again, is someone stealing this profile to break voice-based biometrics. Surely that can be mitigated with some nifty encrypted token of some sort with decent key recovery.

There has been some interesting work done on voice translation technology for soldiers in the streets of Iraq that would have commercial applications in the future. Think babelfish meets voice recognition: speak with anyone regardless of their native language. Can you say "Universal Translator"?

But back to the sci-fi. Combine subvocal recognition with augmented reality, GPS and gesture recognition (think Tom Cruise's manipulation of a multi-input virtual screen in "Minority Report") tied into a Net-connected PDA with a Head-Mounted Display (HMD). Hot damn.

The SVR would allow you to interact with the PDA silently. The GPS, combined with augmented reality as visualized through the PDA's HMD, would allow the user to view the entire physical world as annotated by Semantic Web metadata addicts, similar to some of the art installations touched upon in William Gibson's "Spook Country" (current reading). The gesture recognition, determined through sensors on clothes as well as the HMD, would allow one to interact with the augmented reality and manipulate objects that exist only in the geotagged cyberspace environment. Absolutely mind-blowing.

Ye gods I love taking lunch to paddle down a brain-stream.

bkdelong: (Default)

Here's quite the stream of consciousness from daydreaming on the train ride home yesterday. I went straight up to my office and "leaked" it out into Word before I lost it.

Usual disclaimer: None of this has been edited or researched. Putting this together at present would be nigh impossible but there are a lot of components out there to make a lot of this plausible. This is raw thought.

"May 10, 2007

5:44pm

Robots - about the size of the extended human hand
look like balls of stainless steel or some sort of polymer/titanium
must be durable and light

can have spider-like legs all over...so walking on bottom 3-4....top 3-4 retracted or being used as air sensors / arms.

key is that if it gets on its back, it can walk immediately.

All legs are retractable...micro filaments that "stick" to things like spiders?

Like Ian Mcdonalds 'slow missles"

used for specific missions. 5-10 are released in a field do assess flamability. they each go off in 4-6 directions to establish a grid set at the start by GPS or physical limits. Should be able to go anywhere....higher end bots would be amphibious.

computer data and "sensor" attachments are loaded depending on mission. Bots normally stay in docking station type environment where they can use arms to self assess, self-repair, stay charged and used to load data and sensor attachments.

docking station also durable and can be left in the wild, kept charged through solar, wind or other means. Allows for some extension of standalone lifetime of bots to repair, recharge and update data or download data

bots can create internal, secure asyncronous data. Ideally, one is never not connected to another and the whole 5-10 grouping should all be connected at all times. Else closed one to last position should go cautiously "check" within reason.

Needs to be some sort of survival mode if bots get attacked.

Look at antlion - full arms retract and bot shoots piston deep into ground from belly then spins body or pulls arms out from vertical piston to dig hole. All that stays on the surface would be camo'd antennea and solar device for charging. Needs energy conserving mode.

Bot should be able to climb tree, attach to branch or crawl in hole, retract into ball and stay tight until "safe". These are scouts...not weapons

Bots should be able to connect by any means possible - open wireless networks, bluetooth, packet radio, rfid, possibly even microwaves? probably too high power. When one bot connects to an external entity, it examines available bandwidth and (if possible) establishes a closed VPN through it to "home". Ideally it would be able to do bandwidth prioritization and not use up everything on host device unless emergency.

Base station would have added sat capabilities and depending on carefully calculated energy availability, could be the lifeline for the entire bot network to pass data back and forth. Standalone scenario should be use if base station needs to power down, botnet looses connectivity to the base station, emergency etc.

a simple bot activity could be to act as a mobile repeater. Possibility of bots to connect to each other like Voltron for larger tasks but the use of bots to continue seeking higher locations to maximize signal range.

if possible, specialized bots and base should be EMP shielded to allow for continued operation in the case of a nuclear incident.

bots are loaded with data sets and sensor attachments according to mission. Could be as simple as catalogue all flora and fauna in this sector ....use cameras to "count" wildlife and recognition library to name them. Could be used to constantly assess the fire danger of an area or to survey for other reasons. Could be used for S&R in all terrain (specialized bots) including swamps. Possibility for high-temp resistant bots to work in forest fires, shielding all equipment until in a cool zone.

For longer missions, certain basestations could "house" bots, possibly residing mostly underground for safety. geothermal as possible power source?

Obviously botnet could be used in times of war and war situations as scouts. If small enough and properly camoflauged, they can go unnoticed and be used for a multitude of purposes including night scouting, IED detection and possible disarmament...forward sighting etc.

Specialized versions of bots could collect various samples...would need to determine clean way to certify forensically....military bots would need to be temptest shielded....
"

bkdelong: (Default)

Well, I emailed Dennis Crowley from dodgeball.google my idea for using mobile phones for social applications while stuck on various forms of mass transit. No response, but I'm sure he's a busy bee.

I've been thinking further about it, even expanding it pretty largely. I see a few applications:

  1. Live Transit Data - since I've been riding the MBTA Commuter Rail on a regular basis, I've noticed the train is predictably late; especially the 7:29am Newburyport/Rockport line into North Station. So why not create a service that allows various commuters to text/SMS/IM/email in, (or use a mobile web-based interface), when they arrive at a station and when they leave a station?

    Several things could be done with that data. Update a Google Map live with a train's location. At the "estimated" live location point, have a small icon with the train's number, (in the case of the Inbound 7:29am train from Salem, 108) and show a train with timed waypoints.

    I gave it a shot with my horrible, second-hand, beaten-up Verizon Wireless LG phone and sent myself emails every time we arrived at a stop. My thumb was sore from lack of experience, and it would have helped more if I had been able to send something both when we arrived and when we left, but it was a small experiment proving that this is a feasible idea.

    Keep a history of all these times for referring back to, and even mark them up with XML so one can reference a particular date, time, and period between stations. Also, include a means of reporting accidents, such as the one in Franklin this week, or other such delays, with various categories. Users could also leave messages forum-style on a particular trip, commenting on the train's lateness or the demeanor of the conductors, reporting incidents and requesting witnesses, posting a lost or found....etc

    In addition to the "contribute" side, also allow users to sign up their phone or an email address at which to be notified for a particular train, line etc. Create a series of quick SMS codes allowing ad hoc queries as well as a profile on the Web site at which to set defaults.

    This solves a myriad of issues including "where the hell is the train" instead of waiting 15-20 minutes after an incident for it to show up on the MBTA marquee. It also could be used similar to plane tracking to allow spouses, significant others, and friends to know when to ACTUALLY leave for the station to pick you up.

    <sarcasm>And oooh - look! There's Google Transit on which data can be displayed and integrated!</sarcasm>

  2. Social Networking - My train ride really isn't that long, but it would be nice to know which of my neighbors and friends along the entire Newburyport/Rockport line are taking the same train I am. Allow contributors and participants to state what train they are on, which car they are in, and approximately which seat. Information can be set to "friends only" or based on some other trust relationship to allow for privacy.

    Determining train car numbers for such a short period of time may be difficult, but I'd venture to guess that they don't change them at all during the day, and that one could determine the pattern of which engine/car setup ends up with which train (i.e. my 7:29am 108 train). This would be helpful when arriving at North Station, (South Station etc). I see the 5:55pm/069 train on the board with no track number. However, if I know the engine/car numbers for the day, I can simply look at the last car. This could be risky, as sometimes the train master may arbitrarily choose trains based on their availability. I think the avid users of Railroad.net could make a big contribution here.

    This degree of awareness could allow for localized IMing, ad hoc peer-to-peer laptop-based wireless networks and all sorts of experiments in mobile computing. I already see tons of people on their Blackberries, laptops and other devices working away. Why not enhance their value when commuting?

    I foresee a function of my Tivo being a "trusted friend" on my Commuting network so it will always know what train I am on and pop up a window allowing any viewers to know when my train reaches the stop before Salem....or something.
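The core of idea 1 is easy to sketch: turn rider check-ins into per-stop lateness against the published schedule. The stop names and times below are illustrative placeholders, not real MBTA schedule data:

```python
from datetime import datetime

# Assumed schedule for the inbound 7:29am train 108 (illustrative only).
SCHEDULE = {"Salem": "07:29", "Swampscott": "07:36", "North Station": "08:05"}

def parse(t):
    """Parse an 'HH:MM' check-in time into a comparable datetime."""
    return datetime.strptime(t, "%H:%M")

def lateness(checkins):
    """Given rider check-ins {stop: 'HH:MM'}, report minutes late per stop."""
    report = {}
    for stop, actual in checkins.items():
        if stop in SCHEDULE:
            delta = parse(actual) - parse(SCHEDULE[stop])
            report[stop] = int(delta.total_seconds() // 60)
    return report

# One commuter texting in arrival times from the train:
print(lateness({"Salem": "07:41", "Swampscott": "07:49"}))
# {'Salem': 12, 'Swampscott': 13}
```

Each lateness report could then drive the live map marker and the notification subscriptions described above.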

As always, I fully admit my ideas are at the extreme of technology implementation and usage, but while I strive to be an innovative futurist I am always seeking ways to maximize what little time I have. The more educated I am with contextualized data, the more in control I feel - much more conducive to getting things done.

Please - all comments welcome.

bkdelong: (Default)

I've been playing around with various online calendar and address book services and getting frustrated at having to keep everything in synch with the family paper calendar. I've been peripherally participating in the microformats project which involves contextually marking up date-related information in HTML. It's an incredibly nifty idea especially since several of the major search engines are looking to index this semantically defined information.

Anyway, I was thinking about the developments in ePaper and thin clients - why not create one that has an 802.11a/b/g card and a Bluetooth card, is compatible with vCard and iCalendar/vCalendar, and has the ability to synchronize with various online services through API plugins? The user interface would be via a stylus of some sort, and it would be the size of a calendar - except there's no need to get a new one every year; simply download a set of standards-based calendar "skins", doing whatever the heck you want. Anytime you add an event, it simply uploads it to the service of your choice and stays synchronized with that. If each family member has a different calendar service of choice, then you create accounts with those services and set your calendar to synchronize with all of them.

The same can be done with an address book and contact list - perhaps it can connect and synchronize with all these damn social networks out there as well.
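A minimal sketch of the synchronization core, assuming the device speaks plain iCalendar (RFC 5545); the service names and PRODID below are placeholders, not real plugin APIs:

```python
from datetime import datetime

def vevent(uid, summary, start, end):
    """Build a minimal iCalendar document (RFC 5545 core fields only)."""
    fmt = "%Y%m%dT%H%M%S"
    return "\r\n".join([
        "BEGIN:VCALENDAR",
        "VERSION:2.0",
        "PRODID:-//family-calendar//EN",  # placeholder product id
        "BEGIN:VEVENT",
        f"UID:{uid}",
        f"DTSTART:{start.strftime(fmt)}",
        f"DTEND:{end.strftime(fmt)}",
        f"SUMMARY:{summary}",
        "END:VEVENT",
        "END:VCALENDAR",
    ])

ics = vevent("42@home", "Dentist",
             datetime(2007, 5, 10, 9, 0), datetime(2007, 5, 10, 10, 0))

# Each configured service plugin would receive the same standards-based event:
for service in ("google", "yahoo", "family-paper-sync"):
    print(f"pushing to {service}: {len(ics)} bytes")
```

Because every service gets the identical standards-based payload, adding a family member's calendar of choice is just one more entry in the plugin list.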

bkdelong: (Default)

Though I oft begrudge pulling my slothic body away from my Net Addiction to walk the dog, (gods forbid that I get fresh air or do something that contributes to the household for a change), it's an exercise that provides me with enough stimuli to keep my normally racing brain busy, allowing me to brainstorm. It's similar to the massive amounts of multitasking I do on the computer, except it involves walking, feeling the air, smelling the smells, listening to a cacophony of noise etc. instead of flitting between virtual windows.

Before I get to my BrainStream of the evening, I wanted to note something my brother Nate, ([livejournal.com profile] necr03), was thinking about. Until he truly starts posting to his own LiveJournal, I want to store his ideas somewhere. He was discussing the design of future vehicles. I'm not sure if he was referring to automobiles, airplanes or spacecraft, as it was a few days ago. But he was pondering a day when transmitters and microprocessors are so small and inexpensive that a vehicle could be absolutely covered with them and, instead of absorbing or tricking radar, simply retransmit the signal right through itself and back to the sender, as if it had passed straight through.

Taking that a step further, I could see hundreds of microcameras being mounted on a vehicle while, at the same time, the external body of the vehicle is one giant computer monitor. The image from one side of the vehicle is displayed on the opposite side, as if the vehicle were not there. Difficult to do with the tires and the undercarriage, though perhaps a bit easier to handle for the windows and windshield.

Anyway, back to my idea. I had my little light bulb earlier today while walking the dog. But while completing the final jaunt for the night, I started thinking about holography and the Star Wars scene where R2D2 projects a 3D image of Princess Leia giving a message. I still think holography of that sort is a ways off. We have seen people display similar images on a screen of moving fog, but unless scientists get much better control over light in a space, (I don't know - perhaps projecting dust or some sort of matter in the shape of the holograph, with the light around it helping to make the 3D effect, or something), it'll be a long time before we see truly realistic, interactive holograms. Damn - and I wanted to play a magic user and cast Major Image and other Illusory spells.

However, with all my ramblings on Augmented Reality, I think that technology is the middle ground between normal-space and holography. As HMDs and wearable HUDs get simpler, less intrusive and less expensive, wearable computers combined with these devices and mounted location-aware hardware could completely create the effect of an interactive, quite realistic hologram.

I was thinking how much of a PITA hand-mounted keyboards, or even dealing with voice/subvocal recognition to navigate a GUI, would be. Then I remembered all the research I did into current efforts in "gesture recognition" - that is, detecting hand/body movement and causing a GUI to react as a result. So on a screen in front of you, you would see a series of windows. Hold up your hands and make a grabbing movement. The "hand" cursor on the screen would close on the window and allow you to move it where you wish. Hold up a single finger and point in the general direction of the scroll bar, crook your finger and make a scrolling motion - voila, you're moving down the screen.

My idea would become a part of my previously mentioned wearable computing environment - a multi-beam laser scanner mounted on the HMD, (along with the GPS device), that would cascade several hundred to a thousand beams of light parallel to the front of your body. Your hands would then become the input device, and you could either bring up a keyboard to type on or simply use your hands to navigate windows, combined with your voice as input....or even alternate GUI navigation features. Activating the device would take a voice command or a button somewhere on your wearable, but once activated, it immediately calibrates and allows you full control.

Examples of gesture recognition navigation of a GUI environment can be seen in the movie "Minority Report" as well as the television show "Earth: Final Conflict" where the technology was used to pilot the Taelon shuttles.
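The grab/scroll interactions above amount to a dispatch from recognized gestures to window actions. A toy sketch, with hypothetical gesture-event names standing in for whatever the scanner hardware would actually emit:

```python
def handle(window_state, gesture):
    """Map recognized hand gestures onto window actions (illustrative names)."""
    if gesture["kind"] == "grab":
        window_state["dragging"] = True          # fist closes on a window
    elif gesture["kind"] == "release":
        window_state["dragging"] = False         # open hand drops it
    elif gesture["kind"] == "move" and window_state["dragging"]:
        dx, dy = gesture["delta"]                # hand displacement
        x, y = window_state["pos"]
        window_state["pos"] = (x + dx, y + dy)
    elif gesture["kind"] == "scroll":
        window_state["scroll"] += gesture["amount"]  # crooked-finger motion
    return window_state

win = {"pos": (100, 100), "dragging": False, "scroll": 0}
for g in [{"kind": "grab"}, {"kind": "move", "delta": (30, -10)},
          {"kind": "release"}, {"kind": "scroll", "amount": 5}]:
    win = handle(win, g)
print(win)  # {'pos': (130, 90), 'dragging': False, 'scroll': 5}
```

The hard part, of course, is the recognition layer that produces those events; once you have them, the GUI side is just a dispatch table like this.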

bkdelong: (Default)

As a result of my last few posts, I'm going to start posting "idea roundups" on a regular basis. I can save them in draft form and, when I'm done for a bit, publish them. This is more about me keeping track of my thoughts and brainstorms than about providing additional content for readers.

As such, I will post a disclaimer:

  1. The ideas on this page are not complete. I wanted to write them down so I could keep track of them and eventually expand them into my full brainstream. See this post as an example.
  2. I am fully aware that, in some cases, major parts of my ideas will already exist. I am not claiming to be the first to come up with these ideas and in my expanded version, will most likely reference pre-existing efforts.
  3. Anyone can take my idea and run with it. I am big on thought but lacking follow-through. All I ask is that you credit me as per my Creative Commons license and, if possible, invite me to participate or at least observe the project.
  4. Please feel free to post your thoughts and ideas alongside mine. I really encourage you all to join in - it's a lot of fun!

Idea Roundup #1

  1. Google Sky - An add-on to Google Earth. It would allow for graphic displays of the weather as if you were looking at the horizon - kind of like when news stations pull you under the clouds to see it actually snowing. It would also contain or allow for airline tracking, visual representation of the jetstream etc. As per a previous post, you could also see stars, constellations and other objects in the night sky.
  2. Google Mars - As we get more and more data and pictures from the "Red Planet", begin turning them into a full graphical rendering of Mars for Google Earth-like exploration.
  3. Google Universe/Space - An add-on of sorts to Google Earth, (or perhaps the eventual parent application), this contains all the stars, constellations, comets, meteor showers, satellite positions, and renderings of other celestial phenomena. It could contain a link to Google Moon as well as Google Mars.
  4. Pooper Scooper Attachment - Not a tech application but an actual product. As someone who walks dogs, often when really tired or in inclement weather with heavy gloves on or holding an umbrella, picking up droppings bites - I usually use a plastic grocery bag and dispose of it. One idea is an add-on to those retractable leashes - it resembles the lower half of a pelican beak. The tip is a bit of a shovel, and you easily feed in plastic grocery bags to represent the pelican gullet. Then you click the "stop" on the lead, bend over, scoop with the shovel-tip, tilt it back, and it goes into the bag. Pull the bag downwards out of the "pelican beak" and it ties itself into a knot.
  5. One-Handed Automatic Pooper Scooper - The second is a bit more complex and futuristic. Sometimes bending over, due to an ailing back, old age, or a hyper dog, is impossible. I see the development of a metal "stick" about 3-5in in length with a button. Press the button and it telescopes downward with a claw of sorts to grab various droppings. You can do this as many times as you want. The waste is pulled inside the tube and deposited into a bag which you can later pull out and throw away. But the key with both scoopers is that they use or "recycle" plastic bags.
  6. Small, wireless, battery-powered, ink-jet printers - Being overweight and asthmatic, it's a PITA to go up two flights of stairs to the printer every time I print something. Most of my ideas come out of me being lazy anyway. I can't bring my 1.5ft by 1ft laser printer downstairs - no room. So why not a small, rechargeable, battery-powered ink-jet that you can connect a wireless card to....easily mountable on a wall through suction cups, a narrow shelf or some other non-intrusive means? When cheap enough, (way in the future), these could be a feature of several rooms in a home so one could print anywhere one wants.
  7. RFID Smart Shower/Bath - When we first moved into our house, I thought it was cute that our cat would meow to be fed water trickled from the bath faucet. What a pain! Now she wants it multiple times a day and spends most of her time yowling for more. It got me thinking about the RFID "smart" cat and dog doors people hacked together in the past. Why not have a small computer that stores various water preferences via RFID and learns from adjustments? For starters, if my cat hops up on the edge, it would trickle a little water. When she finally leaves, it fully shuts off. No human standing there, or leaving the water on and wasting some until we come back. This could be expanded in the future to humans via biometrics. I walk into the bathroom and run my hand across the shower sensor; it starts the water running, gets it to a certain temperature and indicates it's ready. A computer can do it a lot faster and more exactly than my fat fingers, so I'd save water. Plus, if I have to adjust the temperature, it can learn my eventual optimum preference. Perhaps the shower would be voice-activated at some point and, feeding into the house sensors, could determine my body temperature and the bedroom temperature and see how hot I make the shower. Then it would know that on days with those conditions, an alternate temperature may be a better starting point.
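The "learns from adjustments" part of idea 7 could be as simple as an exponential moving average per RFID tag. A sketch, where the tag names, temperatures and learning rate are all invented for illustration:

```python
ALPHA = 0.3  # how fast the preference drifts toward new adjustments (assumed)

class ShowerMemory:
    def __init__(self, default_temp=38.0):  # degrees C, assumed house default
        self.default = default_temp
        self.prefs = {}

    def start_temp(self, tag):
        """Temperature to use when a known tag approaches the sensor."""
        return self.prefs.get(tag, self.default)

    def record_adjustment(self, tag, final_temp):
        """Blend the user's final manual setting into the stored preference."""
        current = self.prefs.get(tag, self.default)
        self.prefs[tag] = current + ALPHA * (final_temp - current)

mem = ShowerMemory()
mem.record_adjustment("bk", 41.0)   # I nudged it hotter
mem.record_adjustment("bk", 41.0)   # and again the next day
print(round(mem.start_temp("bk"), 1))  # creeping from 38 toward 41
```

Feeding in the body/bedroom temperature readings would just mean keying the preference on (tag, conditions) instead of tag alone.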
bkdelong: (Default)

Dammit, dammit, dammit. I need to get better at writing down what's in my head.

When I go out to walk the dogs, am driving somewhere, or am trying to get off to sleep, I'm usually inside my head brainstorming. I'm pretty good with mind visualization, and I swear I could live in there. It'd be better than TV if I could connect it to my.....what is it.....visual and aural cortex?

Anyway, in the past year, on one of my walks, I spent a lot of time looking at the stars. It was definitely late spring or summer and quite nice out. I was looking up and trying to identify constellations and stars with little success. Being a sci-fi geek always pondering the stars, space and other "star systems", and being a technologist, futurist and pseudo-transhumanist, I'm always thinking of ways to make life easier. The spiritual side of me is constantly fascinated by correspondences in more Earth- and astrology-based religious traditions.

So I started dreaming.


Ah, the future. One can hardly wait.

bkdelong: (Default)

Well, apparently I am incredibly far behind with regard to vehicle-to-vehicle communications and intelligent transportation systems. So it's not like I'm the only one thinking up these ideas.

However, it did get me thinking about using Voice Recognition with such systems. I mean, we all abhor folks talking on cell phones while driving - making use of ITS while in motion would be a nightmare.

But if you think of all the systems one could potentially use in the future, with voice recognition - one must wonder how long it would take to continually "train" these systems to understand what you're saying. That's where my idea for Voice Recognition Profiles (VRP) comes in - still looking to see who else has done it.

So when I load up a voice recognition program, I am told to read several lines or paragraphs of text so it can match the text content with my voice. For every program I try, I have to retrain it all over again. In theory, if I move from my computer to my car and try to activate my GPS system by voice, it needs to be trained. If I go to an ATM or drive-thru where one can automatically order by voice, I need to spend several minutes correcting the system until I'm connected with a human operator because the damn thing can't understand me.

Why not create a standard profile for voice recognition that all voice-recognition applications can use? That way, when I come to a new system I need to "train", I just type in my SSN or some other UID which tells the system to pull my VRP (Voice Recognition Profile), out of a centralized directory service, allowing me to immediately use the system with a peak understanding of my voice.

In theory, each time I access a new service using my VRP, whatever actions I take and corrections I make in the process, would be noted in my profile and sent back to the directory service for the next time I access a service - a live, constantly-growing, learning profile.
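A minimal sketch of what such a portable VRP record and its correction-merge step might look like, serialized as JSON so any engine could import it. Every field name here is invented for illustration; no actual voice-profile format is implied:

```python
import json

def new_profile(owner):
    """A hypothetical open voice recognition profile record."""
    return {"owner": owner, "version": 1, "words": {}, "corrections": 0}

def apply_correction(profile, heard, meant):
    """Merge one user correction back into the profile, as the directory
    service in this idea would after every session."""
    profile["words"].setdefault(meant, []).append(heard)  # misheard variants
    profile["corrections"] += 1
    profile["version"] += 1
    return profile

vrp = new_profile("bkdelong")
vrp = apply_correction(vrp, heard="salaam", meant="Salem")

payload = json.dumps(vrp)        # what gets synced to the directory service
restored = json.loads(payload)   # what the next engine imports
print(restored["words"]["Salem"], restored["version"])  # ['salaam'] 2
```

The real standard would carry acoustic model data rather than word lists, but the round-trip - export, central merge, re-import on the next device - is the essence of the idea.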

The futurist in me sees the next step to that being appending a subvocalization profile which would translate the subvocalization signals directly to something that could be used to access various devices around an individual, perhaps an enhanced version of Bluetooth.

Anyone heard of such efforts to develop such a voice profile?

bkdelong: (Default)

So one of my previous BrainStreams has been realized and even has had a conference around it - Vehicle-2-Vehicle communications or V2V.

However, with big companies like DaimlerChrysler & GM working on products and research, it seems to me the Web 2.0 community needs to keep close watch on the development of this technology and make sure allowances are made for people to be able to tap directly into the data themselves.

There is light at the end of the tunnel. In a report on a V2V framework, GM's Bill Ball does say an open infrastructure is key to success. Ball is a VP at OnStar.

I haven't read through all the documents from the conference or contacted the presenters yet, but I'd really like to see an XML DTD or a handful of RDF schemas specific to marking up data gleaned from V2V networks. I think in order to ensure it won't get locked up in some proprietary or industry-only standard, the Web 2.0/Semantic Web community needs to take ownership and leadership in developing such standards.

Something to keep an eye on is the IEEE DSRC standard - Dedicated Short-Range Communications, meant to augment cellular with high-bandwidth connectivity for vehicle-to-roadside and vehicle-to-vehicle applications.

Chris ([livejournal.com profile] crschmidt) - this takes MeNow, FOAF and other SemWeb apps to a whole new level

bkdelong: (Default)

With all of the talk of NOLA and surrounding areas being absolutely demolished I can't help but keep thinking what an awesome opportunity for renewable energy solutions this is. Solar is a given - stick panels wherever possible. For an area that gets a lot of storms and wind, why not stick a few Wind Farms both off the coast and inland in various areas?

For an area near the Mississippi and other rivers, as well as the Gulf of Mexico, it is ripe for hydroelectric power plants. Why not also make use of the hydrothermal vents in the Gulf, though, at the moment, they may be too deep?

Turn this important, needed area for oil refinery and shipping into a shining example of clean, renewable energy.

As this is a brainstream of mine, it needs a token "crazy idea". Has anyone thought about using lightning as a renewable energy source? Or do we simply not have the technology to withstand and hold lightning charges yet? I mean, it can't be any worse than calling "nuclear waste" a result of a "clean energy" solution. What would happen if we relied solely on nuclear energy? Where would we keep all the waste without affecting the environment or the people who currently live in desirable dumping locations, and simply leave those sites be for hundreds of years?

This got sparked as I was trying to come up with a solar/wind solution to keeping various gadgets from flashlights to iPod shuffles to digital cameras charged while headed down South without relying on power needed for relief efforts.

NB: Like any good BrainStream, this is mostly editorial rather than factual otherwise it would be filled with links.

bkdelong: (Default)

I took some time yesterday to further organize my thoughts from this post. We needed such an application on 9/11, we need one for Katrina, and, no doubt, as the hurricane seasons get worse and worse and the potential for other natural disasters like blizzards and earthquakes grows, we will need to create such an application again.

I passed some of these ideas on to MoveOn.org; perhaps they'll take the lead, but they're a political organization. The US Government has already proven that they're too mired in politics and turf wars to execute a successful aid operation, and they're always behind in technology. NGOs and other aid organizations don't really have the resources to create such a system, so it's up to the technology community.

Not only is this an app that could be used in the US but with open mapping data initiatives springing up around the globe, and platforms like Google Earth to make use of them, this could be used for any number of worldwide natural disasters.

<Open BrainStream>

1) Trust Issues - It may be too late to implement this quickly, but between Katrina and 9/11 it may be wise for a massive organized effort to support such a thing - take the "friend of" concept of Orkut or LiveJournal, which extends a level of trust to the friend-of-a-friend level. You see, how many people really want to open their home to a stranger with absolutely no context for who they are? Far fewer than if there was some sort of trust system in place.

I've been building up my contact list on LinkedIn, as I personally see it as one of the most successful applications for Social Networks to date. There are about 5 people on my contact list whom I haven't directly interacted with online or in person, but with all the rest I have some degree of trust. So take 78 contacts with their total 5,900+ contacts, and those contacts' contacts equal 367,400+ according to LinkedIn.
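Those reach numbers are just breadth-first expansion over a contact graph. A toy sketch in Python (the graph and names are entirely made up for illustration):

```python
from collections import deque

def reach_by_degree(graph, start, max_degree=3):
    """Count how many new people are reachable at each degree of
    separation from `start`, expanding breadth-first."""
    seen = {start}
    counts = {}
    frontier = deque([start])
    for degree in range(1, max_degree + 1):
        next_frontier = deque()
        for person in frontier:
            for contact in graph.get(person, ()):
                if contact not in seen:
                    seen.add(contact)
                    next_frontier.append(contact)
        counts[degree] = len(next_frontier)
        frontier = next_frontier
    return counts

# Hypothetical contact graph: me -> direct contacts -> their contacts.
graph = {
    "me": ["ann", "bob"],
    "ann": ["me", "carol", "dave"],
    "bob": ["me", "dave", "erin"],
    "carol": ["frank"],
}

print(reach_by_degree(graph, "me"))  # degree 1, 2, 3 counts
```

Scale the same walk up to 78 first-degree contacts and you get LinkedIn-style six-figure third-degree reach.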

Combine that with Orkut's "levels" of trust: 1) friend but haven't met, 2) acquaintance, 3) friend, 4) good friend, 5) best friend. I'd almost add a similar set of levels for family: 1) Immediate family, 2) Sibling's family, 3) Parent's family, etc. Honestly, I'd like to see a flexible filter-like system like LiveJournal's, where perhaps we have a large set of defaults (like the above), add a few more relationship types, and allow people to create their own and set trust levels that are translated into "English" for them to verify.

2) Ridesharing - Both due to a friend's need to get a relative from TX to BOS and since many of the homes appearing on hurricanehousing.org are way further out than 100 miles, people either need to get to friends or family or even these hurricanehousing sites.

I see the ridesharing almost like bus/train connections. W can get X to point A, Y can take X to point B, and Z can take X to their final destination at point C. Almost like a massive socialist communing network a la Zipcar.

Interesting....who needs gas and oil? Set up rideshares across the country! ;)

3) Google Maps/Earth mashups - Take all the basic location data for both rideshares and hurricanehousing and stick it in a Google Maps setup for people to see and have access to. This will be very helpful when trying to determine how to get from one rideshare location to another. Perhaps hack the Google Directions to say "to get to Boston, take rideshare A555 from Baton Rouge to rideshare D453 in Kentucky. Take rideshare D453 to Pennsylvania rideshare T887. Take T887 to Boston's South Station"...or whatever.
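The bus/train-connection analogy above is really just shortest-path search over rideshare segments. A toy sketch (the ride IDs echo the hypothetical ones in the text; the city pairs are made up):

```python
from collections import deque

# Hypothetical rideshare segments: (ride_id, origin, destination).
segments = [
    ("A555", "Baton Rouge", "Kentucky"),
    ("D453", "Kentucky", "Pennsylvania"),
    ("T887", "Pennsylvania", "Boston"),
    ("Q101", "Baton Rouge", "Atlanta"),
]

def plan_rides(segments, origin, destination):
    """Breadth-first search for a chain of rideshares from origin to
    destination; returns the ride ids in travel order, or None."""
    queue = deque([(origin, [])])
    visited = {origin}
    while queue:
        city, path = queue.popleft()
        if city == destination:
            return path
        for ride_id, src, dst in segments:
            if src == city and dst not in visited:
                visited.add(dst)
                queue.append((dst, path + [ride_id]))
    return None

for ride in plan_rides(segments, "Baton Rouge", "Boston"):
    print(f"take rideshare {ride}")
```

The Google Directions hack would then just be templating each hop of the returned chain into a sentence.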

<Close BrainStream>

We have the tools - I passed these ideas on to the rdfweb group instrumental in developing FOAF, and the Google Maps ideas to a geowanking list...we just need to reach more geek types. They're out there and we can do this.

bkdelong: (Default)
During lunch the past couple of days, I've been working on a way for people to define their political views in a FOAF file, or any RDF for that matter. As a result of my first attempt, I've realized that we first need a way to define a series of political districts in which voting for an elected official takes place. I think I've captured that with the below bit of code.

The <Region> tag is the parent for defining all the various <locale>s in which voting takes place for a person. You define the localetype - which is just the name for the voting region - the proper name for the locale via <localename>, and the various <localepositions> via individual <localeposition> entries. Finally, you define what hierarchical order the locale appears in within the governmental structure via the <regionorder> tag. Most of what you see below is logical; however, although the Massachusetts Representative Districts can sometimes include a single city, most voting districts span across counties, hence I've listed the "District" localetype above the "County" localetype.
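A sketch of the kind of markup described above, generated with Python's xml.etree - the tag names come from the prose, and the locale values are purely illustrative, not the original markup:

```python
import xml.etree.ElementTree as ET

def build_region(locales):
    """Build a <Region> as described: each <locale> carries a
    localetype attribute, a <localename>, its <localepositions>,
    and a <regionorder> giving its place in the hierarchy."""
    region = ET.Element("Region")
    for order, (ltype, lname, positions) in enumerate(locales, start=1):
        locale = ET.SubElement(region, "locale", localetype=ltype)
        ET.SubElement(locale, "localename").text = lname
        pos_parent = ET.SubElement(locale, "localepositions")
        for p in positions:
            ET.SubElement(pos_parent, "localeposition").text = p
        ET.SubElement(locale, "regionorder").text = str(order)
    return region

# Illustrative, Massachusetts-flavored values only.
region = build_region([
    ("Country", "United States", ["President", "Vice President"]),
    ("State", "Massachusetts", ["Governor", "State Senator"]),
    ("District", "6th Congressional District", ["Representative"]),
    ("County", "Essex County", ["Sheriff"]),
    ("City", "Salem", ["Mayor", "City Councilor"]),
])
```

Note the "District" locale deliberately sits above "County" in the ordering, per the cross-county point made above.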



Once I defined my various voting regions, I created an election instance using dc:date, and started laying out each individual <Vote> by pairing the localeposition with details of the candidate. I need to come up with a better way to identify an individual candidate because I'm not sure foaf:mbox would work. For cases where I was undecided, I created a <votestatus> tag to say so.
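A hedged sketch of that election instance, again via xml.etree: one <Vote> per localeposition, a dc:date on the election, and a <votestatus> for undecided races. The candidate names are hypothetical placeholders.

```python
import xml.etree.ElementTree as ET

DC = "{http://purl.org/dc/elements/1.1/}"  # Dublin Core namespace

def build_election(date, votes):
    """One <Election> with a dc:date and a <Vote> per localeposition,
    pairing the office with a candidate (or votestatus 'undecided')."""
    election = ET.Element("Election")
    ET.SubElement(election, DC + "date").text = date
    for position, candidate in votes:
        vote = ET.SubElement(election, "Vote")
        ET.SubElement(vote, "localeposition").text = position
        if candidate is None:
            ET.SubElement(vote, "votestatus").text = "undecided"
        else:
            ET.SubElement(vote, "candidate").text = candidate
    return election

election = build_election("2004-11-02", [
    ("President", "Jane Doe"),   # hypothetical candidate
    ("State Senator", None),     # undecided race
])
```

Swapping the <candidate> child for something like CurrentlySupporting, as suggested below, would be a one-line change here.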



Ideally, my political voting region vocabulary could become generalized for the world, and voting informational sites would generate PoliticsRDF data for people to stick in their FOAF files. If people didn't want to identify their votes until they've voted, a <Vote> item could be converted to something along the lines of CurrentlySupporting in case one's views change over time.



Unfortunately, I haven't created any sort of RDF schemas so I could use some assistance in making this a reality. I don't even know if I have the proper triples setup. Once that is done, we can add tags to define a particular political view and attach a "status" with regard to that view such as "for" or "against".



Comments are welcome - please help me enhance and internationalize this. I think it would be a fascinating way to constantly take polls and the like.





bkdelong: (Default)

Alright - another brain dump. I'm not sure what's up with me, but the ideas keep a-flowin'. Bear with me, because some of these ideas may have already been done while others are just...weird.



So I've been getting more into my TiVo of late, and I realized just how much data there is inside it. Unfortunately, I have a Series 2 TiVo, which isn't hack-friendly and doesn't run Open Source. I love my TiVo - not sure what I'd do without it - but it's the little things that irritate me:




  • Why is there only the ability to rate movies/shows with 3 thumbs up or 3 thumbs down? Why not a simple rating of 1-10? Anything from 1-4 sucks, 5 is "no opinion", and 6-10 is pretty good to great. I want my TiVo to be able to make good recommendations and I want to give good data to TiVo, but I can't with such a limiting rating system.


  • Why can't we rate individual episodes? There are some shows that have a completely different subject each episode; let me decide whether I liked the subject, the episode, how it was presented, etc. Maybe even have a database of multiple-choice contextual questions based on the show's subject/metadata that helps tell WHY I did or didn't like the show.


  • Why not make the "Advanced Search" easy to access? Why do I have to do a hardware hack just so I can have a Wishlist with both a Director and Actor....or a keyword and actor?


  • If I have some wishlists & season passes set to autorecord based on preference, let me say "If a more preferred show is a rerun that I have recorded and fully watched once, and will be recorded instead of a less preferred show that is a first run, ASK me if I want to see that instead." More intelligent TV!


  • I want to know why I cannot watch movie trailers on my TiVo. I can listen to music and share photos, but I so want to watch all these cool trailers on my TV - not my computer screen. Wake up TiVo!



So in my desire for all things data, I want to get my data from the source. Why should I have to copy data down? It's in a format that's usable as-is or is transformable via XML...so why can't I access it?



My brother is playing with Freevo since he's the *nix wonk in the family, and I hope he makes progress. With TiVo being so closed now, it seems building my own Personal Video Recorder (PVR) is the future. I hate to say it, but it's true. So I'll make the rest of my ideas PVR-generic.



Idea 1: Fun with GPS, RFIDs and Presence



With all this talk and thinking about MeNow, I've started getting more and more interested in GPS. My phone apparently has 911-accessible GPS as well as AGPS, where I guess I need some service to assist me in accessing my GPS location. Annoying.



The future of mobile computing is going to be presence-based. Let's say I'm walking down some Main St. USA - could be Broadway in NYC, Newbury St in Boston, or even Essex St. in my town of Salem, MA. Each location I walk by may have some data I'm interested in. This is why I'm perfectly happy to give TiVo my TV-watching data, IMDB my movie watching data, Amazon my media shopping data and ranking, and Google my search data - heck, even my supermarket my shopping history. I want recommendations and special deals. I feel confident enough to handle spam sent my way and want to benefit from the data aggregated about me.



So ideally, I'd have my FOAF file combined with encryption-based "trust relationships" saying who can and cannot have access to certain data. Ideally this would be linked to MeNow data with a GPS location of where I am at a given time, and possibly an RFID/Wireless/Bluetooth identifier that says "I'm here waiting for queries" and may be pushing out all my various user data anonymously. If I walk past a store, I'd get a ping that says "Acme Groceries thinks they have some specials you might be interested in." I can either say "ok, spill it" or "no thanks". There'd have to be some sort of robots.txt stating which types of shops I am OK with getting queries from, which ones can outright give me info, and which ones are blacklisted.
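That robots.txt idea boils down to a small allow/ask/deny policy keyed on shop category. A minimal sketch, with the categories and store names purely hypothetical:

```python
# Hypothetical policy: which shop categories may push offers outright
# ("allow"), must ask first ("ask"), or are blacklisted ("deny").
POLICY = {
    "grocery": "allow",
    "restaurant": "ask",
    "payday_loans": "deny",
}

def handle_query(shop_name, category, default="ask"):
    """Decide what to do with an incoming offer ping, robots.txt-style:
    unknown categories fall back to the default action."""
    action = POLICY.get(category, default)
    if action == "allow":
        return f"{shop_name}: show offer"
    if action == "ask":
        return f"{shop_name}: prompt 'ok, spill it?'"
    return f"{shop_name}: dropped"

print(handle_query("Acme Groceries", "grocery"))
```

The in-stock "allow" list for sour cream and enchilada sauce mentioned below would just be another policy table keyed on product instead of category.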



I'm also annoyed by my favorite store never having large containers of Fat Free Sour Cream (only the small), being out of Green Chile Enchilada Sauce or Rosemary, and never having more than one flavor of Edy's Fat-Free, No Sugar Added Ice Cream. So I can have an "allow" that says let me know which stores have these things in stock.



If I walk past a restaurant, it would look for similar restaurant ratings I have done or dishes that are my favorite and let me know if they have them. A movie theatre may say it still has tickets left for a show I want to see and a music store may let me know they have a CD I've been seeking for ages.



Of course, in a dreamworld this information would be accessible from my home or in my car as well. If I am looking for any of the above I mentioned, my home digital agent can let me know when a favorite or local store has one in stock or if in my car, I can be alerted that there is a store on my route that has what I'm looking for - based on preferences that say I don't want to have to turn onto more than one street after taking an exit from a highway.



In most cases, the data is out there. Almost everything nowadays is done on computers and if people can be educated to save or ask for digital files from companies who produce printed material for them and then have some means of making it available, it would be amazing. Perhaps there's a market for local or regional aggregators who run a series of servers in neighborhoods and communities and have representatives who work with the stores & businesses in those locations to send data to the server where it gets transformed and served.



Ah, pipe dreams.



So where does that fit into PVRs, you might ask? Well, a few things. So I have my GPS coords being sent to my Web/Presence server. Say my wife wants to know when I'm within 15min of home. I can call her when I leave work, or she can see my AIM connection go down or notice the "status" on the front page of BrainStream. But traffic in Boston can be unpredictable. Short of calling me every so often, how can she know where I am? That's impractical.



A little jiggering with my FOAF & MeNow plus telling some sort of application or script: if I am <> MeNow:isWith rel:spouseOf, and MeNow:status = "enroute home", then send a note to foaf:person kirkyr, using her MeNow:preferredcontact, to let her know when my MeNow:isLocated is within 5 miles or 10-15min of the house. (Which reminds me, I want to add a preferred contact method to MeNow because there are many different ways to contact me at any given time...also, adding some sort of list of common locations to my FOAF would be great.)
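The heart of that rule is just a distance check against the home coordinates. A sketch of the trigger, with the threshold and coordinates purely illustrative:

```python
from math import radians, sin, cos, asin, sqrt

def miles_between(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles via the haversine formula."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 3959 * 2 * asin(sqrt(a))  # Earth radius ~3959 mi

def should_notify(status, gps, home, threshold_miles=5):
    """Fire the MeNow-style alert only when the status says we're
    enroute home AND the GPS fix is inside the radius."""
    return (status == "enroute home"
            and miles_between(*gps, *home) <= threshold_miles)

HOME = (42.5195, -70.8967)  # Salem, MA (approximate)
print(should_notify("enroute home", (42.52, -70.93), HOME))
```

The FOAF/MeNow plumbing would decide whose presence feed gets checked and where the note goes; this is just the "within 5 miles" predicate.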



That's where the PVR comes in. Kirky's presence app looks at her MeNow and notices her contactPreference is set to "PVR". So the app looks to see if the PVR is actively being used AND if the TV is on - is that possible? Maybe the TV power would have to be plugged into the PVR, a la a surge-protector-like thing. If both these factors are true, it sends a popup window in the bottom corner of the screen - much like annoying Network TV ads have been doing during shows lately - that says "Ben is 10-15min from home".



Depending on trust-relationship definitions on my end and detail preferences on her end, it could give her the option to access my GPS coords on a map to see where I am. That way she COULD contact me to ask me to bring something home, or just start prepping dinner or doing anything else she wanted to do before I got home.



I wonder if anyone's done any work on pulling data from a car's computer. Right now I'm guessing there are few standards and the only people who can do it are mechanics. It would be cool, though, if once in-car wireless became more prevalent, you could easily pull all sorts of data from your car's computer like speed data, gas consumption, fluid levels, etc. Is there an RDF schema for this data? You could factor in the GPS location, add in the movement restrictions from maps, measure distance, and - based on traffic patterns (I can just see an RDF schema for traffic: designate a traffic incident, what caused it, how long it will take to clear, what the delay and reduced speed are, etc.) plus the speed the car is going - get the most accurate estimate of the time it would take to reach a location.



Details aren't too important but ideally a Linux-based PVR would have a small email client/server for sending and receiving notifications or some other means. This alert system could be used for almost anything - checking the weather for severe weather in the area, neighborhood watch alerts for the neighborhood, or some other alert that we've specified via preferences. If there is data, you could be alerted.



But what if the PVR is off? Then it would see that the second means of contact is a phone. I'd love to have an application that "called" her phone number and either played a pre-recorded message from me saying I'm 15min away, or read something in a text-to-speech voice that gets the message across.
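That fallback logic - try the PVR popup, fall through to the phone - is an ordered list of delivery channels. A sketch with hypothetical channel checks:

```python
def deliver_alert(message, channels):
    """Walk an ordered list of (name, is_available, send) channels
    and deliver the alert through the first one that is up."""
    for name, is_available, send in channels:
        if is_available():
            return f"sent via {name}: {send(message)}"
    return "undeliverable"

# Hypothetical channels: the PVR is off here, so the phone picks it up.
channels = [
    ("pvr", lambda: False, lambda m: f"popup '{m}'"),
    ("phone", lambda: True, lambda m: f"text-to-speech '{m}'"),
]

print(deliver_alert("Ben is 15min away", channels))
```

In a real setup the `is_available` checks would poll the PVR and the phone network rather than return constants.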

Idea 2: PVR News feeds, other applications



It's getting to the point that the more powerful PVRs become, the more applications that can be built into them. Why not turn them into a terminal server of sorts? Grab my OPML file from my central server and keep my RSS/Atom news feeds up to date so between shows, I can see what the news is - better than any Live News Network Ticker - no need to change the channel. Allow me to set preferences in my PVR alert system so I'm alerted to certain feeds when they are updated. I'd love to get FoodTV or food-oriented feed updates while on FoodTV.
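Grabbing the feed list from an OPML file takes only a few lines with the standard library. A sketch - the feed titles and URLs are hypothetical:

```python
import xml.etree.ElementTree as ET

# Hypothetical OPML subscription list, as a PVR might fetch it.
OPML = """<opml version="1.1">
  <body>
    <outline text="FoodTV updates" xmlUrl="http://example.com/foodtv.rss"/>
    <outline text="Local news" xmlUrl="http://example.com/news.rss"/>
  </body>
</opml>"""

def feed_urls(opml_text):
    """Extract every feed URL (the xmlUrl attribute) from an OPML
    document, skipping folder-only outline nodes."""
    root = ET.fromstring(opml_text)
    return [o.get("xmlUrl") for o in root.iter("outline") if o.get("xmlUrl")]

print(feed_urls(OPML))
```

The PVR's ticker would poll each returned URL and render the freshest headlines between shows.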



Heck, for that matter, why not let me terminal-service into my laptop or IMAP into my email account and check email so I don't have to be on a computer? It's only a matter of time before PVRs allow for a wireless keyboard/mouse or some other interaction - like gesture-based (think Minority Report) or using one's eyes.



Idea 3: Closed-captioning data



I'm not sure this is even possible, as I haven't had time to research it. Is it possible to pull closed-captioning data from a TV stream using a TV card? There's an incredible amount of potential if so.



For starters, you could get full transcripts for shows - helpful if you wanted to find something later but are not sure what was said - or, even cooler, use your browser and network access on your PVR to "blog" a show while watching it. You'd be able to quote and cite relevant material based on text from closed captions.



What about metadata? The more I get involved in the semantic web, the more I realize just how critical metadata is. Let advanced users go through the closed captions to attach a series of keywords to shows and then tie that data to recommendations and ratings. If there was a particular thing about a show that interested you, you could rank keywords appropriately - maybe even use that metadata to easily search for stored shows.
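Keyword tagging from captions could start as nothing fancier than term counting over the transcript. A toy sketch, with a made-up caption snippet and a tiny stoplist:

```python
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "to", "of", "in", "is", "we", "with"}

def caption_keywords(transcript, top=3):
    """Rank the most frequent non-stopword terms in a caption stream,
    as a crude first pass at show metadata."""
    words = [w.strip(".,!?").lower() for w in transcript.split()]
    counts = Counter(w for w in words if w and w not in STOPWORDS)
    return [word for word, _ in counts.most_common(top)]

# Hypothetical closed-caption text from a cooking show.
captions = ("Tonight we braise the short ribs. Braise them slowly, "
            "then serve the ribs with polenta.")
print(caption_keywords(captions, top=2))
```

Real metadata would want stemming and the show's own subject data mixed in, but even raw term frequency is searchable.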



The only problem is not everyone uses closed-captioning, and speech-to-text that doesn't involve voice training is pretty poor. Something to think about though - now THAT's interactive TV.



Idea 4: Sharing data



Just like I would REALLY, really like to share all of my "My Movies" lists from the IMDB, I'd love to share data from my TiVo - what movies have I seen recently, what shows do I have a season pass to, what are my top-ranked shows or episodes, what are the worst? Why not make Atom feeds of these lists and use an XSLT to create RSS 0.91, RSS 1.0, and RSS 2.0 feeds that can be pulled? I want to find like-minded people who share my interests a la Audioscrobbler.
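Publishing a season-pass or top-shows list as a feed amounts to templating a handful of item elements. A minimal RSS 2.0 sketch in Python (the show titles are hypothetical; a real setup might generate Atom and run the XSLTs mentioned above):

```python
import xml.etree.ElementTree as ET

def shows_to_rss(title, shows):
    """Wrap a list of (show, note) pairs as a minimal RSS 2.0 feed
    string: one <item> per show."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = title
    for show, note in shows:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = show
        ET.SubElement(item, "description").text = note
    return ET.tostring(rss, encoding="unicode")

feed = shows_to_rss("My Season Passes", [
    ("Good Eats", "season pass"),  # hypothetical titles
    ("Nova", "top-ranked"),
])
```

Anyone's aggregator could then pull the feed and match viewing overlap, Audioscrobbler-style.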



Someday, all this preference data aggregation will allow for customized commercials that people will actually watch, so all you network execs should listen.



So in conclusion: presence, presence, presence, metadata, metadata, sharing, sharing, aggregation. It's the future, ladies & gents; hopefully it will be here sooner rather than later.



(Links to come)




