I aim to shape products, interfaces and services that mediate meaningful dialogues between people, systems and their environments within everyday life.
Last weekend I attended the AT&T mobile app hackathon, my very first hackathon. Within a 24-hour period, we had to get into teams and build a mobile app for education.
I teamed up with a dynamic, multidisciplinary group to develop an idea for a stuffed teddy bear that teaches kids in a tangible, interactive, and engaging way. By combining touch, visuals, sound, and voice with an understanding of the principles of turn-taking necessary for natural conversation and for keeping attention, we created an immersive and entertaining experience for children.
Our proof of concept integrated a mobile device into a teddy bear. The mobile device runs an application that listens to voice input and uses natural language processing to formulate an appropriate verbal response using text-to-speech. We developed a small library of educational lessons, like recognizing colours, learning the alphabet and listening to animal sounds, as well as little game rewards that have Teddy tell a joke or sing a song.
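To make the listen-interpret-respond loop concrete, here's a minimal sketch of how such an app might pick a lesson from a child's utterance. The lesson names and keyword matching are illustrative assumptions, not our actual hackathon code, which used a real speech-to-text and NLP service.

```python
# Hypothetical sketch: match a transcribed utterance to a lesson,
# then formulate Teddy's reply (a real app would send it to TTS).
LESSONS = {
    "colours": ["red", "blue", "green", "colour", "color"],
    "the alphabet": ["letter", "alphabet", "abc"],
    "animal sounds": ["cow", "dog", "duck", "animal", "moo"],
}

def choose_lesson(transcript):
    """Pick the lesson whose keywords best match the child's words."""
    words = transcript.lower().split()
    scores = {
        lesson: sum(word in words for word in keywords)
        for lesson, keywords in LESSONS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

def respond(transcript):
    """Formulate Teddy's verbal reply from the recognized speech."""
    lesson = choose_lesson(transcript)
    if lesson is None:
        return "Hmm, can you say that again?"
    return f"Great! Let's practice {lesson} together."
```

In the real app, the transcript would come from the device's speech recognition and the returned string would be spoken aloud by the text-to-speech engine, with the UI updating in step.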
For the mobile device I designed a simple and fun UI to provide helpful visuals to accompany the audio and the ongoing conversation between teddy and child. To manage the child's learning progress and lesson plans, we created an admin UI for the parent or teacher to choose the education level and the types of lessons/games, as well as track the child's progress over time.
By the end of the 24-hour period, all teams went up to do a show-and-tell of their prototype. We hit a couple of hiccups with the voice recognition in our demo, but it was really well received. We ended up being the grand prize winner – not too shabby for a first-time hacking experience!
I attended O’Reilly’s Strata “Making Data Work” conference in Santa Clara, California this past February. I summarized a few of the interesting data visualization sessions I attended in a design lunch-and-learn presentation for my company. And now I’m getting around to posting it up for your viewing pleasure.
After some intense weeks of late nights and hard work, the Stanford Human-Computer Interaction course I took through Coursera has come to an end. It was a great way to motivate myself to work on my own personal project. After 5 weeks of immersing myself in the whole end-to-end design process, from user research and observations all the way to learning jQuery Mobile for my design implementation and conducting user evaluations, I came out with a prototype for a mobile biking app designed to encourage and guide urban exploration. It’s still in a very rough stage at the moment, and I’ve gleaned some valuable feedback from user evaluations that will require some big design changes. This has become a pet project that I intend to carry on after the course.
This course offering was an experimental launch of a design course in an online format, so there were some hiccups and a few things that could be improved, but overall it was quite successful. I especially enjoyed the peer assessments for each assignment, as they let you see what ideas other students were working on and receive constructive criticism and feedback on your own project. Professor Scott Klemmer was a great instructor, and he plans to offer the course again later on with improvements based on feedback and what they learned from this first round. So if you’re interested in HCI or UI design, or even if you’re familiar with the concepts already, this course is worth trying out.
Take Part created an engaging and powerful story of how a virus spreads from one person to another to erupt into a pandemic. Both educational and humorous, it teaches the public how we can avoid massive outbreaks through the simple act of washing our hands regularly.
via Brain Pickings
I just downloaded the AntiMap Log iPhone app to try out on my next snowboarding trip. The mobile app allows you to record your own data in real time as you are out and about, whether it be mountain biking, skiing, running or driving. Collected data such as latitude, longitude, compass direction, speed, distance, and time, can then be analyzed and visualized with a suite of AntiMap tools: AntiMap Simple and AntiMap Video.
Originally created as a snowboarding/ski application, AntiMap Video syncs riders’ video footage with real-time stats, giving the impression of a video game:
AntiMap Simple is an HTML5/Processing visualization for the log data. The visualization below is for the same snowboarder. AntiMap describes the visualization:
Circles are used to visualise the plotted data. The color of each circle is mapped to the compass data (0˚ = black, 360˚ = white), and the size of each circle is mapped to the speed data (bigger circles = faster). The same data used in this demo, was used in the AntiMap Video snowboarding application. You can see from the visualisation, during heelside turns (left) the colours are a lot whiter/brighter than toeside turns (right). The sharper/more obvious colour changes indicate either sudden turns or spins (eg. the few black rings right in the centre).
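The mapping AntiMap describes is simple to express in code. Here's an illustrative sketch of it in Python; the value ranges and the scaling factor are my own assumptions for demonstration, not taken from AntiMap's source.

```python
# Hypothetical sketch of AntiMap Simple's visual mapping:
# compass heading (0-360 degrees) -> greyscale value (0 = black, 255 = white),
# speed -> circle radius (bigger circles = faster).

def heading_to_grey(heading_deg):
    """Map a compass heading linearly onto a greyscale value."""
    clamped = max(0, min(360, heading_deg))
    return round(clamped / 360 * 255)

def speed_to_radius(speed, scale=0.5, minimum=2):
    """Scale speed to a circle radius, clamped to a minimum visible size."""
    return max(minimum, speed * scale)
```

With this mapping, a run of headings near 360° plots as bright circles (heelside turns in the demo) while headings near 0° plot as dark ones, and abrupt jumps in heading show up as sharp colour changes, just as AntiMap describes for spins.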
Here’s a really amusing illustration of how you can prove misleading statements by simply putting 2 graphs together. I especially loved Fig. 6.
via Fast Company
Last weekend I attended Spotlight HTML5, held at U of T, covering an interesting range of HTML5-related topics like geolocation, semantic tagging structure, back-end canvas drawing, CSS3, interactive web video, and polyfills. Speakers came from Teehan+Lax Labs, Microsoft, Adobe, and AOL.
The talk about CSS3 was pretty exciting, as it highlighted some new things you can now do on the web that couldn’t be done in the past. The big advantages of CSS3 are better search engine placement from the use of real text, increased page performance, better usability and accessibility, optimized styles, and the ability to draw and animate elements.
A topic that continually came up throughout the various talks was the concept of responsive design, in which the layout of the content adapts to the device/media you are using. Greg Rewis, in fact, stresses that browsing experiences should not be the same across different platforms and resolutions. The CSS3 specifications now include media queries to target not only specific devices but also physical characteristics of the devices, like screen width and resolution. CSS3 also introduced some new background specifications; background-size is of particular interest, especially from an accessibility perspective. This property lets you specify the size of the background image, either as a fixed value or relative to the background positioning area. It doesn’t sound particularly interesting so far. But say you use background images for text menu items and your users need to bump up the text size for easier reading: the background images would scale WITH the larger text sizes. You end up with an elegant and flexible UI where the text doesn’t look like it has broken out of the confines of static images. A great example of this is the Fresh Picked Design site:
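Here's a minimal sketch of how those two features could combine for a menu item whose background scales with the text. The class name and image path are illustrative, not from any site mentioned above.

```css
/* Hypothetical menu item: sizing in em units plus background-size
   means the background image grows along with the user's text size. */
.menu-item {
  font-size: 1em;
  padding: 0.5em 1em;
  background-image: url("button-bg.png");
  background-repeat: no-repeat;
  background-size: 100% 100%; /* stretch to fill the element */
}

/* Media query targeting narrow screens: stack the menu vertically. */
@media screen and (max-width: 480px) {
  .menu-item {
    display: block;
  }
}
```

Because the padding and font size are in em units, bumping up the base text size enlarges the whole element, and `background-size: 100% 100%` keeps the image filling it.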
The CSS3 talk was only one of many interesting presentations that day, but the other presentation slides can be found on the FITC site. For me, the conference was a great introduction to the new features and specifications enabled by HTML5 and CSS3 that will provide some inspiration for my future designs on the web.
Last night I gave a talk on a UX education panel hosted by IxDA Toronto. I was joined by 3 other panelists: a college new media instructor, a senior creative director of a large design agency, and an interaction designer/educator at a local design studio. It was really interesting to discuss the diverse paths people followed to end up in the UX field. Formal education backgrounds ranged from computer science and information/library science to fine art and design, while others were self-taught and learned on the job.
User experience or interaction design is such a multi-faceted discipline that you need to build a foundation of skills ranging from the creative to the technical and analytical. There is no “one size fits all” educational path.
As a reaction to Microsoft’s recent future vision video, software engineer (and a former concept designer at Apple) Bret Victor wrote a fantastic post entitled “A Brief Rant On The Future Of Interaction Design.”
Victor rants that this future vision is not visionary at all. It focuses too much on screen interaction, which is not that much different from our experience with our current devices. Case in point: look at all these ‘future’ interactions in Microsoft’s concept:
Each one of these scenes involves a flat screen. Yet, Victor also points out (and passionately so) that each interaction touchpoint involves the use of… hands! As humans, we have not only our fingers but our hands, arms and entire bodies that enable us to manipulate and interact with the natural world and to understand the tactile feedback we receive in return. So why should we be limited to finger pointing on a screen?
He illustrates the many ways in which we can use our hands to manipulate things that we could not possibly express via screen-based interactions:
Rather than limiting people to finger tapping/swiping, we should be inspired by our own human capabilities to design and enable a richer and more expressive interaction with our future tools.
Despite how it appears to the culture at large, technology doesn’t just happen. It doesn’t emerge spontaneously, like mold on cheese. Revolutionary technology comes out of long research, and research is performed and funded by inspired people.
And this is my plea — be inspired by the untapped potential of human capabilities. Don’t just extrapolate yesterday’s technology and then cram people into it. [...] Pictures Under Glass is old news. Let’s start using our hands.
Victor ends with a question that nicely sums up his entire point:
With an entire body at your command, do you seriously think the Future Of Interaction should be a single finger?
IDEO Labs put together a collaborative and non-linear visual story inspired by the exquisite corpse style of storytelling:
The exquisite corpse model is rooted in the surrealist movement, and we are inspired by how many experiments currently in public domain play with its framework (or lack thereof). Our take on the model—in which we essentially asked a group of collaborators to submit sentences/fragments—was to create a dynamic visualization for the “exquisite” story our writers had crafted. These collective fragments formed a base on which we layered sensory artifacts, from voice-over to tagged visuals, and we were curious as to how far we could take the experience.
They asked 150 people to submit a Twitter-length sentence. Using those fragments, they compiled a 1600-word story narrated by a single voice and illustrated it with images from Flickr linked to key words.
Combining aspects of Lego, video games, and board games, Sifteo Cubes are a new way to play. The prototype concept was introduced in a 2009 TED talk by David Merrill, and now these interactive wireless blocks are coming to market. Showcasing innovative interaction design, these 1.5-inch cubes with full colour screens are motion- and context-aware, allowing players to shake, tilt, jolt, rotate, slide and click them to affect neighbouring tiles.
They pioneer something the company calls “Intelligent Play,” which is a vaguely elevated term for a toy that manages to be both fun and smart. They’re video games for people who hate video games. [...] “We’re not trying to compete with Nintendo, Microsoft, EA and others,” Sifteo spokesman Paul Doherty tells Co.Design. “We’re trying to create games that promote learning, spatial reasoning and truly interactive play.”
See the Sifteo cubes in action:
A simple ring around a tree acts as a new space for kindergarten children to learn and play. The idea of using senses and bodily movement as tools for learning inspired the design:
The preferred space for teaching preschool children avoids the classical dynamics of frontal lectures. In “Philosophical Investigations,” Ludwig Wittgenstein writes that what children and foreigners have in common is the absence of knowledge of language and a set of codified rules. This leads them—in the first instance—to learn through the senses and the body. To give the children more freedom to move around the school, the directors of the Fuji Kindergarten requested Tezuka to design spaces without furniture: no chairs, desks or lecterns. As a result, “Ring Around a Tree” offers an architecture where there are no measures taken to constrain space, in order to liberate the body.
The Japanese Zelkova tree had already been a “place-playmate” for several generations, serving as a treehouse, temporary shelter, and climbing area before being transformed into an addition to the Fuji Kindergarten.
Looking back on my own experience, the staircase and balcony of my childhood home were a playmate for my sisters and me. Beyond simply connecting the floors, they became an area for us and our friends to slide down and climb, listen to story time and put on puppet shows. What was your place playmate?
Such a wonderful film. Makes me excited for my next travel adventure.
3 guys, 44 days, 11 countries, 18 flights, 38 thousand miles, an exploding volcano, 2 cameras and almost a terabyte of footage… all to turn 3 ambitious linear concepts based on movement, learning and food ….into 3 beautiful and hopefully compelling short films…..
= a trip of a lifetime.
move, eat, learn
Rick Mereki : Director, producer, additional camera and editing
Tim White : DOP, producer, primary editing, sound
Andrew Lees : Actor, mover, groover
George Kokkinidis highlights the variety of user interfaces on multi-touch tablets by photographing the resulting fingerprints on an iPad surface after using different applications.
The differences are highlighted by the quality, location, and quantity of the taps and swipes, displaying the unique interactions required by each application and providing a narrative of how a certain application was used.
Read Kokkinidis’ blog entry here.
One of the best things I love about New York City is its brilliant use of urban space to engage the public.
On my recent trip to NYC, I had to re-visit the High Line, a revitalization project transforming the elevated rail line into an innovative public park and space for exploration, interaction, and art installations. This summer, the High Line opened the new section 2 extension, which led to a new public plaza below called The Lot. To my delight I encountered Rainbow City, a whimsical playground filled with giant colourful balloon sculptures (including a bouncy castle) inviting both children and adults to play.
The installation has since been taken down, but now in its place is another great idea: an open air rollerskating rink. Wonderful inspiration for other urban cities.
I came across a nice article on Smashing Magazine outlining some design guidelines for optimizing performance on mobile devices. Performance plays an important part in creating a valuable, enjoyable and trustworthy experience, one that encourages users to keep using your application or product. Not only does the application need to look amazing, it needs to feel and work amazing as well.
The seven guidelines are as follows:
For example, front-end design can speed up the perceived performance of a back-end delay by providing intermediary steps that display load progress (loading animations, text content, etc.). This gives the user the impression that the system is progressing through various steps rather than stalling, which is how it would feel if they simply jumped from screen 1 to 4 as illustrated below.
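A minimal sketch of that staging idea in JavaScript: the slow request is kicked off immediately, while the UI shows a sequence of status messages so the wait reads as progress. The step names and the `updateStatus` target are illustrative assumptions, not from the article.

```javascript
// Hypothetical sketch: wrap a slow back-end call in visible
// intermediate steps to improve perceived performance.
function updateStatus(message) {
  // In a real app this would update a loading indicator in the UI.
  console.log(message);
}

async function loadScreen(fetchData) {
  updateStatus("Connecting...");
  const pending = fetchData(); // start the slow request right away
  updateStatus("Loading your content...");
  const data = await pending;  // the indicator keeps the user oriented
  updateStatus("Almost there...");
  return data;
}
```

The key design point is that the request starts before the first status message finishes; the intermediate screens overlap the delay instead of adding to it.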
Having worked on several projects in the mobile space over the past year, I’m completely drawn to the site Lovely UI, which showcases inspiring mobile user interfaces.
Two wonderful videos explaining user experience design. Now I can show this to my family and friends whenever they ask me what I do.
Who doesn’t love good UX design, and who doesn’t get totally frustrated with bad experience design?
Hail to all the great UX designers of the world. Spread the love for UX design !!!
MIT Media Lab’s Tangible Media Group has developed Recompose, an experimental touch interface that provides tactile feedback.
Recompose is a new system for manipulation of an actuated surface. By collectively utilizing the body as a tool for direct manipulation alongside gestural input for functional manipulation, we show how a user is afforded unprecedented control over an actuated surface.
Made up of motorized tiles that pop up and down, the 3D interface can be directly manipulated by pressing down on the tiles, or simply by waving your hand over various areas of the surface, which move in response to your input. The feedback is a 3D visualization of the user’s physical interaction with the tiles. A camera and projector, combined with computer vision, are used to recognize and understand the language of the physical interactions.
via Fast Company
Ever since I started taking culinary arts classes back in the fall, I’ve developed an appreciation for beautiful and well-crafted kitchen tools. Heck, I even stroll through Williams Sonoma for fun.
Pop Chart Lab made a detailed mapping of over 100 kitchen implements. I love the visual language in this poster and I learned about some interesting new tools.
via Fast Company