I started with the idea that I really wanted to translate human behavior, or human input, into a simulation of natural phenomena like weather.
My first idea was a heart-shaped device held in the hand that detects pulse, heartbeat, temperature, etc. The device would attach to a cloud shape that actually shows those input data: when the person gets excited, the cloud can rain, make thunder sounds, and so on.
Then I tried to develop the idea a little more: instead of pulse input, I wanted the cloud to be mood-related, so I thought about speech input.
Here is my second idea: Cloud Therapy, inspired by the suicide hotline sign on the bus. Like an actual therapy session, there would be a sofa and a telephone linked to the cloud. When the user talks, the cloud responds with lighting or sound, and relaxing music plays from the telephone speaker. There would also be an ICM graphic on a screen, made of mist, under the cloud.
That’s a lot, I know.
After talking with Tom about this idea, he pointed out the question of how to make the user actually want to talk to a cloud, which really made me think.
Tom was also nice enough to show me a clip from Close Encounters of the Third Kind. It showed the interaction between the humans and the alien ship, playing classical music back and forth, sort of like Simon Says, a project I did for the midterm.
This is cute, but it's not really the interaction I hope to build between the cloud and the human…
Then I had a wonderful talk with Jingwen. She's so great and resourceful, and was also kind enough to show me so many inspiring works.
Here is my (sort of) final idea.
One change I made from the second plan that I'd like to point out: I put the telephone in the middle, so the message is more straightforward: talk with the cloud, if you pick the phone up.
Here is how it (is supposed to) work:
When the motion sensor detects a person, the phone rings, and most people would be prompted to pick it up. Then voice instructions guide the rest, which can be seen in the workflow: press different buttons on the phone to record, or to listen to a memory (well, I haven't really decided on this part yet). And the cloud will respond (of course) to the speech, based on the emotion mapped from it, with outputs like lighting frequency, light brightness, color, extra sound, etc.
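The emotion-to-output mapping could start as something very simple. Here is a minimal sketch of what I mean, assuming the speech analysis gives a single emotion score from -1 (sad/calm) to +1 (excited); the function name, ranges, and colors are all placeholders, not final decisions:

```javascript
// Hypothetical mapping from an emotion score (-1..1) to cloud outputs.
// All names and ranges are placeholders for the real mapping strategy.
function emotionToCloud(score) {
  // clamp the score so bad input can't break the mapping
  const s = Math.max(-1, Math.min(1, score));
  return {
    brightness: Math.round(128 + s * 127),      // 1..255: calmer = dimmer
    flashesPerMinute: Math.round((s + 1) * 10), // 0..20: more excited = more flashes
    color: s < 0 ? 'blue' : 'warm white',       // rough mood color
  };
}
```

Keeping the mapping in one small function like this would make it easy to tweak during the playback feedback without touching the sensor or LED code.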
What I like about this idea is visualizing a "memory cloud", and turning something that used to be a tech thing into something human and warm (for a little bit??).
I feel I should still have a little time to develop the idea. I absolutely LOVE this messy brainstorming. Let's see after tomorrow's playback.
Since I may not have a partner, timing is also an issue. But I'm excited about doing the whole thing by myself, especially the coding and fab parts. YEAHHHHHHHH
What I've learned from this so far is to always, always talk about your idea with other people. You never know what you'll learn from someone else's perspective.
And I think I will book more office hours…
Special thanks to Tom, Jingwen, Hayeon, Yeseul.
Here is a rough timeline and BOM.
Task and timing
Main to-dos:
Hack the phone
ICM part -library
11.8 make the plan
11.9 finalize the idea, order the parts
11.10 – 11.17 work with the library and voice input; work on the mapping strategy and the circuit at the same time
11.18 – 11.25 combine ICM + PComp
HACK THE PHONE
11.26 – 11.30 fab
11.30 – 12.8 refining
12.8 – 12.14 buffer time
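For the "combine ICM + PComp" step, the p5.js sketch and the Arduino will need to agree on a message format over serial. A minimal sketch of one option, assuming a newline-delimited, comma-separated protocol (the format and function names are my own placeholders, not a decided design):

```javascript
// Hypothetical serial protocol for the ICM + PComp step:
// the p5.js side sends one line per update, e.g. "180,12,blue\n",
// and the Arduino side splits it on commas. Placeholder format.
function packCloudMessage(brightness, flashesPerMinute, color) {
  return `${brightness},${flashesPerMinute},${color}\n`;
}

// Same parsing logic, written in JS here for testing; the real
// parser would live on the Arduino (e.g. split on ',' after readStringUntil).
function parseCloudMessage(line) {
  const [brightness, flashes, color] = line.trim().split(',');
  return {
    brightness: Number(brightness),
    flashesPerMinute: Number(flashes),
    color,
  };
}
```

A plain-text line protocol like this is easy to debug in the serial monitor, which should help when the ICM and PComp halves come together in that 11.18 – 11.25 window.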
- Cloud material
- LED strip
- Light string (see if I can hack them)
- Telephone (hack it)