I aim to shape products, interfaces and services that mediate meaningful dialogues between people, systems and their environments within everyday life.

Apr
21
2015

The past month kept me occupied with building a couple of interactive installations: the Layered City urban planning game and an augmented reality topographic sandbox. It all finally came together this past weekend.

I collaborated with the Seattle Design Nerds to create two interactive installations for the American Planning Association (APA) conference. The first was a physical community-building game made up of five plexiglass panels arranged so that, when viewed from one end, they form an entire city. Each panel corresponds to a “layer” of planning: public works, private works, landmarks, transportation, and land use. When participants approach the installation, they draw a game card that directs them to add an individual element of the city (a bus, house, bridge, etc.) to one of the layers using a stencil. With washable window crayons, participants fill in the stencil within the context of its surroundings on the panel, spatially relating the new element to those on the other layers. Participants are also invited to create their own cards for each layer, with their own directions to others, to accommodate items, situations or concepts that we may have missed. In this way the game and installation evolve over the course of the conference.





Our second installation was a digital SimSandbox, an augmented reality topographic simulation using a Kinect and a projector. The display is based on software designed by Oliver Kreylos. By moving the sand around you can create mountain and valley terrains, and by holding your hand up high above the terrain you can add water and watch it flow across the landscape. In this way the system can be used to illustrate topographic changes, erosion, and flooding, as well as the effect of sea level rise on world cities.
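Under the hood the idea is straightforward. Here is a minimal sketch of the rain-trigger logic (my own illustration, not Kreylos’s actual code; the names and thresholds are assumptions): sample the depth frame from the Kinect, treat anything sitting well above the calibrated sand surface as a hand, and spawn water at those cells.

  // Illustrative sketch only: spawn "rain" wherever the depth frame shows
  // something hovering well above the sand surface.
  const HAND_CLEARANCE_MM = 150; // assumed gap between the sand and a raised hand

  function findRainSources(depthFrame, sandDepthMap, width, height) {
    const sources = [];
    for (let y = 0; y < height; y++) {
      for (let x = 0; x < width; x++) {
        const i = y * width + x;
        // Depth is distance from the camera, so a hand held above the sand
        // reads closer (smaller) than the calibrated sand depth at that cell.
        if (sandDepthMap[i] - depthFrame[i] > HAND_CLEARANCE_MM) {
          sources.push({ x: x, y: y });
        }
      }
    }
    return sources;
  }

  // Each frame, add water to the fluid simulation at every detected cell.
  // addWater() is a stand-in for whatever the simulation actually exposes.
  function update(depthFrame, sandDepthMap, width, height, addWater) {
    findRainSources(depthFrame, sandDepthMap, width, height).forEach(function (cell) {
      addWater(cell.x, cell.y, 1.0);
    });
  }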




Mar
03
2015

Microsoft has created a great video envisioning the future of productivity. How could emerging technologies transform the way we collaborate, explore information, work smarter and faster, and move fluidly between work surfaces?


Read more here

Oct
06
2014

Weekend hackathons are quite an investment of time and effort, but it’s so fun to come up with new ideas and build them out in a 24-hour time crunch. The organizers had a pair of Vuzix Smart Glasses lying around, so my friend and I decided to experiment with them and brainstorm some fun ideas to build.

As an avid jogger (him) and bicyclist (me), we thought: wouldn’t it be great if we could record our scenic jogs and rides without having to stop and interrupt the activity? We came up with a simple idea: wear the smart glasses on the go and record photos hands-free, using a quick nod of the head to invoke a camera capture. The photo series, recorded along with your map route, acts as a diary to help you remember the details of your activities.
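The nod detection itself is the fun bit. Roughly (an illustrative sketch, not the code we hacked together that weekend; the sensor feed, thresholds and camera call are all assumptions), you watch the pitch rate from the glasses’ gyroscope and fire a capture when a quick down-then-up motion happens within a short window:

  // Illustrative nod detector: a fast pitch-down followed by a pitch-up
  // within a short window counts as a nod. All thresholds are made up.
  const DOWN_RATE = -1.5;    // rad/s, head pitching down
  const UP_RATE = 1.5;       // rad/s, head pitching back up
  const NOD_WINDOW_MS = 600; // the down-up motion must finish within this window

  function createNodDetector(onNod) {
    let downAt = null;
    return function onGyroSample(pitchRate, timestampMs) {
      if (pitchRate < DOWN_RATE) {
        downAt = timestampMs;                      // possible start of a nod
      } else if (downAt !== null && pitchRate > UP_RATE) {
        if (timestampMs - downAt < NOD_WINDOW_MS) {
          onNod();                                 // down then up, fast enough: nod!
        }
        downAt = null;
      }
    };
  }

  // Usage: feed gyro samples in; capturePhoto() is a placeholder for the
  // glasses' actual camera API.
  const onGyroSample = createNodDetector(function () { capturePhoto(); });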



May
14
2014

Over the weekend I attended AT&T’s Art+Tech hackathon, which challenged participants to combine technology, innovation and art to raise awareness about the decline in bee populations.

Knowing nothing about bees, I learned some fascinating things from the talks by beekeepers and conservationists and came to understand just how important bees are. Bees are not only the main pollinators of our food; they’re clever at communicating exact pollen locations (distance and direction) through dance, and can actually “see” whether a flower is filled with pollen.

Seattle-Tacoma International Airport is home to the Flight Path project, which has turned scrub land into pollinator habitat for bees and, in parallel, has transformed a corner of the airport concourse into an art and educational exhibit.

For the 24-hour challenge, I teamed up with an awesome group of five, and we spent all of Friday night brainstorming ideas until we finally nailed down our concept after midnight. The next day we got down and dirty. I took on the challenge of creating the animated visualization and decided to use D3.js to make it happen. This was the perfect chance for me to learn D3… though dabbling with a new library to build a working prototype under huge time pressure could have proven disastrous. I didn’t think I’d make it, but luckily I figured everything out and hooked up our Arduino counter to pass live data into the visualization, hooray!

By the end, my team had produced a concept for a live interactive data visualization of the beekeeping operations in the airfield. The goal of our piece was to first attract, then engage and educate passengers about the plight of the bees.

ATTRACT PHASE
The main display is an animated visualization mimicking a bee hive that represents live bee activity based on data transmitted from Arduino motion detectors in the hives. Each hexagon lights up and fades out as bees enter/exit the hive. A small webcam stream from inside the hive helps connect the real-world activity to the abstracted visualization.
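For a rough idea of how the flashing worked (a simplified sketch rather than our hackathon code; the hexagon layout and the shape of the sensor events are assumptions), each event from the Arduino picks a hive cell, flashes it to full opacity, and lets it fade back out with a D3 transition:

  // Simplified sketch of the ATTRACT view. Assume the hexagons have already
  // been drawn as <path class="cell"> elements; layout code is omitted.
  var cells = d3.selectAll("path.cell");

  // Called for every motion-sensor event streamed from the hive's Arduino.
  function onBeeEvent() {
    var i = Math.floor(Math.random() * cells.size()); // pick a cell to light up
    cells.filter(function (d, idx) { return idx === i; })
      .style("opacity", 1)          // flash to full brightness
      .transition()
      .duration(3000)
      .style("opacity", 0.15);      // fade back to the hive's resting glow
  }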

INTERACT PHASE
When passersby approach the exhibit, a motion detector sensing a viewer standing in front of the screen triggers the INTERACT phase. The flashing visualization turns into animated infographics that tell a compelling story about the decline of bees, which we hope will bring more awareness to the importance of bees and the positive impact of the Flight Path project on bee conservation. The final display is a live Twitter stream that collects all the “buzz” about the installation, encouraging people to spread the word about the exhibit and about the bees.

The weekend project was a ton of fun. Not only did I meet some great people and teach myself D3 in less than a day, but my team was also one of the top three winners of the event (a sweet bonus indeed!)

Sep
15
2013

The Microsoft Garage hosted the Everyday Data UX Hackathon last month, posing the challenge of creating an interactive visualization of the data that surrounds us in our everyday lives.

We are always producing data; with the pervasiveness of wearables, we can collect a great deal of information about ourselves to monitor, maintain, and improve our health. We looked at existing quantified-self tools like Fitbit, Jawbone’s Up and the Nike FuelBand, which all visualize the results of users’ actions, but only as discrete bits of data that provide no mid- to long-term perspective, nor easy correlations that help one understand how to adapt or change behaviour. What if the data from those sources could be tied into dynamic and reactive visual documents that reflect, in real time, how adjusting your behaviour could directly impact your life?

We set out to build a visualization that bridges the gulf between reflection and execution. We identified three main problems to tackle in order to achieve this goal: 1) data heterogeneity (how can we combine data of multiple dimensions to make correlations?); 2) macro overview vs. micro details (how can we drill into details and navigate our data points over time?); and 3) emotional design for motivation (how can a visualization motivate people toward behavioural change?)


We explored many visualization methods for combining different dimensions of data. The metrics we were interested in included number of steps, activity level, hours of sleep, calories burned, number of floors climbed, and so on. We used radar charts to summarize a person’s activities over a day, with each dimension measured along a different axis originating from the same point. This produces a unique shape that can be overlaid on previous days’ activities to see how you’re doing over time, or overlaid on a goal shape you’ve set. We also enabled detailed drill-downs to view a time series charting a single metric over a longer period, which can in turn be compared with other measures (e.g. calories consumed vs. calories burned).
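At its core, the radar chart is just a polar-to-Cartesian mapping: each metric gets its own axis angle, the normalized value becomes a radius along that axis, and the points are joined into a closed shape that can be overlaid on another day’s shape or on a goal shape. A minimal sketch of that mapping (not our actual hackathon code; metric names and maxima are placeholders):

  // Minimal radar-chart mapping: one axis per metric, each value normalized
  // to [0, 1], converted to an (x, y) point and joined into a closed polygon.
  var metrics = ["steps", "activeMinutes", "sleepHours", "calories", "floors"];
  var maxima = { steps: 12000, activeMinutes: 120, sleepHours: 9, calories: 3000, floors: 25 };

  function radarPath(day, radius) {
    var points = metrics.map(function (metric, i) {
      var angle = (2 * Math.PI * i) / metrics.length - Math.PI / 2; // start at 12 o'clock
      var value = Math.min(day[metric] / maxima[metric], 1);        // clamp to the axis maximum
      return [radius * value * Math.cos(angle), radius * value * Math.sin(angle)];
    });
    return "M" + points.join("L") + "Z"; // SVG path string for the day's shape
  }

  // Overlaying two shapes (e.g. today vs. a goal you've set) is just two paths:
  // svg.append("path").attr("d", radarPath(today, 100));
  // svg.append("path").attr("d", radarPath(goal, 100));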


We also proposed an aesthetic, glanceable lock screen that shows your day’s abstract shape. Live tiles on your phone can also show you a summary of your day’s progress through an easy-to-digest visualization.


Our design struck a fine balance between data art and raw data: a visualization with aesthetics that is not only functional in correlating your multiple activities but also easy to understand and motivating enough to get people to adjust their behaviour. For future development, we’d like to look at plugging in different data sources (currently we’re only using Fitbit data), increasing interactivity, such as letting you manipulate one dimension to see its impact on another (e.g. can increasing step count improve sleep quality?), and providing smart inferences to make the data more actionable.


Jul
03
2013

Last weekend I attended the AT&T mobile app hackathon, my very first hackathon. Within a 24-hour period, we had to form teams and build a mobile app for education.

I teamed up with a dynamic and multidisciplinary group to develop an idea for a stuffed teddy bear that teaches kids in a tangible, interactive, and engaging way. By combining touch, visuals, sound, and voice with an understanding of the principles of turn-taking necessary for natural conversation and for keeping attention, we created an immersive and entertaining experience for children.

Our proof of concept integrated a mobile device into a teddy bear. The mobile device runs an application that listens to voice input and uses natural language processing to formulate an appropriate verbal response via text-to-speech. We developed a small library of educational lessons, like recognizing colours, learning the alphabet and listening to animal sounds, as well as little game rewards that have Teddy tell a joke or sing a song.
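As a rough idea of the listen-and-respond loop, here is a sketch using the browser’s Web Speech API (not the mobile stack we actually built on that weekend; lookupResponse() is a made-up stand-in for the lesson library and language-processing step):

  // Sketch of Teddy's turn-taking loop: listen for one utterance, look up a
  // response from the lesson library, speak it, then listen again.
  var recognition = new webkitSpeechRecognition();
  recognition.continuous = false; // one utterance per turn keeps turn-taking natural

  recognition.onresult = function (event) {
    var heard = event.results[0][0].transcript;
    var reply = lookupResponse(heard); // placeholder for the NLP / lesson lookup
    var utterance = new SpeechSynthesisUtterance(reply);
    utterance.onend = function () {
      recognition.start();             // Teddy finished talking, so listen again
    };
    speechSynthesis.speak(utterance);
  };

  recognition.start(); // start the first turn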

For the mobile device I designed a simple and fun UI to provide helpful visuals accompanying the audio and the ongoing conversation between Teddy and the child. To manage the child’s learning progress and lesson plans, we created an admin UI for the parent or teacher to choose the education level and the types of lessons/games, as well as track the child’s progress over time.


By the end of the 24-hour period, all teams went up to do a show-and-tell of their prototypes. We hit a couple of hiccups with the voice recognition in our demo, but it was really well received. We ended up being the grand prize winner – not too shabby for a first-time hacking experience!


Sep
05
2012

I attended O’Reilly’s Strata “Making Data Work” conference in Santa Clara, California this past February. I summarized a few of the interesting data visualization sessions I attended in a design lunch-and-learn presentation for my company, and now I’m finally getting around to posting it here for your viewing pleasure.


Jul
20
2012

After some intense weeks of late nights and hard work, the Stanford Human-Computer Interaction course I took through Coursera has come to an end. It was a great way to motivate myself to work on my own personal project. After five weeks of immersing myself in the whole end-to-end design process, from user research and observation through learning jQuery Mobile for my implementation to conducting user evaluations, I came out with a prototype for a mobile biking app designed to encourage and guide urban exploration. It’s still in a very rough stage at the moment, and I’ve gleaned some valuable feedback from user evaluations that will require some big design changes. This has become a pet project that I intend to carry on after the course.

This offering was an experimental launch of a design course in an online format, so there were some hiccups and a few things that could be improved, but overall it was quite successful. I especially enjoyed the peer assessments for each assignment, as they let you see what ideas other students were working on and receive constructive criticism and feedback on your own project. Professor Scott Klemmer was a great instructor, and he plans to offer the course again with improvements based on feedback and what was learned from this first round. So if you’re interested in HCI or UI design, or even if you’re familiar with the concepts already, it’s worthwhile to give this course a try.

Jun
01
2012

Take Part created an engaging and powerful story of how a virus spreads from one person to another and erupts into a pandemic. Both educational and humorous, it teaches the public how we can avoid massive outbreaks by simply washing our hands regularly.

via Brain Pickings

Feb
10
2012

I just downloaded the AntiMap Log iPhone app to try out on my next snowboarding trip. The mobile app allows you to record your own data in real time as you are out and about, whether mountain biking, skiing, running or driving. Collected data such as latitude, longitude, compass direction, speed, distance, and time can then be analyzed and visualized with a suite of AntiMap tools: AntiMap Simple and AntiMap Video.

Originally created as a snowboarding/ski application, AntiMap Video syncs riders’ video footage with real-time stats, giving the impression of a video game:

AntiMap Simple is an HTML5/Processing visualization for the log data. The visualization below is for the same snowboarder. AntiMap describes the visualization:

Circles are used to visualise the plotted data. The color of each circle is mapped to the compass data (0˚ = black, 360˚ = white), and the size of each circle is mapped to the speed data (bigger circles = faster). The same data used in this demo, was used in the AntiMap Video snowboarding application. You can see from the visualisation, during heelside turns (left) the colours are a lot whiter/brighter than toeside turns (right). The sharper/more obvious colour changes indicate either sudden turns or spins (eg. the few black rings right in the centre).
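In other words, each logged row is reduced to two visual channels. A minimal sketch of that encoding on an HTML5 canvas (my own illustration, not AntiMap’s source):

  // Sketch of AntiMap Simple's encoding: compass heading -> grey level,
  // speed -> circle radius, drawn onto a plain HTML5 canvas.
  var ctx = document.querySelector("canvas").getContext("2d");

  function drawPoint(x, y, compassDeg, speed, maxSpeed) {
    var grey = Math.round((compassDeg / 360) * 255); // 0 deg = black, 360 deg = white
    var radius = 2 + (speed / maxSpeed) * 10;        // faster = bigger circle
    ctx.beginPath();
    ctx.arc(x, y, radius, 0, 2 * Math.PI);
    ctx.fillStyle = "rgb(" + grey + "," + grey + "," + grey + ")";
    ctx.fill();
  }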

Jan
18
2012

Here’s a really amusing illustration of how you can “prove” misleading statements simply by putting two graphs together. I especially loved Fig. 6.

via Fast Company

Dec
10
2011

Last weekend I attended Spotlight HTML5, held at U of T, which covered an interesting range of HTML5-related topics like geolocation, semantic tagging structure, back-end canvas drawing, CSS3, interactive web video, and polyfills. Speakers came from Teehan+Lax Labs, Microsoft, Adobe, and AOL.

The talk about CSS3 was pretty exciting, as it highlighted new things you can now do on the web that couldn’t have been done in the past. The big advantages of CSS3 are better search engine placement from the use of real text, increased page performance, better usability and accessibility, optimized styles, and the ability to draw and animate elements.

A topic that came up repeatedly throughout the various talks was responsive design, in which the layout of the content adapts to the device or medium you are using. Greg Rewis, in fact, stressed that browsing experiences should not be the same across different platforms and resolutions. The CSS3 specification now includes media queries to target not only specific devices but also physical characteristics of those devices, like screen width and resolution. CSS3 also introduced some new background properties; background-size is of particular interest, especially from an accessibility perspective. This property lets you specify the size of the background image, either as a fixed value or relative to the background positioning area. That doesn’t sound particularly interesting so far, but say you use background images for text menu items and your users need to bump up the text size for easier reading: the background images would scale WITH the larger text sizes. You end up with an elegant and flexible UI where the text doesn’t look like it has broken out of the confines of static images. A great example of this is the Fresh Picked Design site:

The CSS3 talk was only one of many interesting presentations that day; the other presentation slides can be found on the FITC site. For me, the conference was a great introduction to the new features and specifications enabled by HTML5 and CSS3, and it will provide some inspiration for my future designs on the web.

Dec
02
2011

Last night I gave a talk on a UX education panel hosted by IxDA Toronto. I was joined by 3 other panelists: a college new media instructor, a senior creative director of a large design agency, and an interaction designer/educator at a local design studio. It was really interesting to discuss the diverse paths people followed to end up in the UX field. Formal education backgrounds ranged from computer science and information/library science to fine art and design, while others were self-taught and learned on the job.

User experience or interaction design is such a multi-faceted discipline that you need to build a foundation of skills ranging from the creative to the technical and analytical. There is no “one size fits all” educational path.

Nov
11
2011

As a reaction to Microsoft’s recent future vision video, software engineer (and a former concept designer at Apple) Bret Victor wrote a fantastic post entitled “A Brief Rant On The Future Of Interaction Design.”

Victor rants that this future vision is not visionary at all. It focuses too much on screen interaction, which is not that much different from our experience with our current devices. Case in point: look at all these ‘future’ interactions in Microsoft’s concept:

Each one of these scenes involves a flat screen. Yet, Victor also points out (and passionately so) that each interaction touchpoint involves the use of… hands! As humans, we have not only our fingers but our hands, arms and entire bodies that enable us to manipulate and interact with the natural world and to understand the tactile feedback we receive in return. So why should we be limited to finger pointing on a screen?

He illustrates the many ways in which we can use our hands to manipulate things that we could not possibly express via screen-based interactions:

Rather than limiting people to finger tapping/swiping, we should be inspired by our own human capabilities to design and enable a richer and more expressive interaction with our future tools.

Despite how it appears to the culture at large, technology doesn’t just happen. It doesn’t emerge spontaneously, like mold on cheese. Revolutionary technology comes out of long research, and research is performed and funded by inspired people.

And this is my plea — be inspired by the untapped potential of human capabilities. Don’t just extrapolate yesterday’s technology and then cram people into it. […] Pictures Under Glass is old news. Let’s start using our hands.

Victor ends with a question that nicely sums up his entire point:

With an entire body at your command, do you seriously think the Future Of Interaction should be a single finger?

Aug
30
2011

IDEO Labs put together a collaborative and non-linear visual story inspired by the exquisite corpse style of storytelling:

The exquisite corpse model is rooted in the surrealist movement, and we are inspired by how many experiments currently in public domain play with its framework (or lack thereof). Our take on the model—in which we essentially asked a group of collaborators to submit sentences/fragments—was to create a dynamic visualization for the “exquisite” story our writers had crafted. These collective fragments formed a base on which we layered sensory artifacts, from voice-over to tagged visuals, and we were curious as to how far we could take the experience.

They asked 150 people to submit a Twitter-length sentence. Using those fragments, they compiled a 1600-word story narrated by a single voice and illustrated it with images from Flickr linked to key words.

via Core77

Aug
17
2011

Combining aspects of Lego, video games, and board games, Sifteo Cubes are a new way to play. The prototype concept was introduced in a 2009 TED talk by David Merrill, and now these interactive wireless blocks are coming to market. Showcasing innovative interaction design, these 1.5-inch cubes with full colour screens are motion- and context-aware, allowing players to shake, tilt, jolt, rotate, slide and click them to affect neighbouring tiles.

They pioneer something the company calls “Intelligent Play,” which is a vaguely elevated term for a toy that manages to be both fun and smart. They’re video games for people who hate video games. […] “We’re not trying to compete with Nintendo, Microsoft, EA and others,” Sifteo spokesman Paul Doherty tells Co.Design. “We’re trying to create games that promote learning, spatial reasoning and truly interactive play.”

See the Sifteo cubes in action:

via Co.Design

Aug
12
2011

A simple ring around a tree acts as a new space for kindergarten children to learn and play. The idea of using senses and bodily movement as tools for learning inspired the design:

The preferred space for teaching preschool children avoids the classical dynamics of frontal lectures. In “Philosophical Investigations,” Ludwig Wittgenstein writes that what children and foreigners have in common is the absence of knowledge of language and a set of codified rules. This leads them—in the first instance—to learn through the senses and the body. To give the children more freedom to move around the school, the directors of the Fuji Kindergarten requested Tezuka to design spaces without furniture: no chairs, desks or lecterns. As a result, “Ring Around a Tree” offers an architecture where there are no measures taken to constrain space, in order to liberate the body.

The Japanese Zelkova tree had already been a “place-playmate” for several generations, serving as a treehouse, temporary shelter, and climbing area before being transformed into an addition to the Fuji Kindergarten.

Looking back on my own experience, the staircase and balcony of my childhood home were a playmate for my sisters and me. In addition to simply connecting the floors, they became an area for us and our friends to slide down and climb, listen to story time and put on puppet shows. What was your place-playmate?

Aug
11
2011

Such a wonderful film. Makes me excited for my next travel adventure.

MOVE from Rick Mereki on Vimeo.

3 guys, 44 days, 11 countries, 18 flights, 38 thousand miles, an exploding volcano, 2 cameras and almost a terabyte of footage… all to turn 3 ambitious linear concepts based on movement, learning and food ….into 3 beautiful and hopefully compelling short films…..

= a trip of a lifetime.

move, eat, learn

Rick Mereki : Director, producer, additional camera and editing
Tim White : DOP, producer, primary editing, sound
Andrew Lees : Actor, mover, groover

Aug
10
2011

George Kokkinidis highlights the variety of user interfaces on multi-touch tablets by photographing the resulting fingerprints on an iPad surface after using different applications.

The differences are highlighted by the quality, location, and quantity of the taps and swipes, displaying the unique interactions required by each application and providing a narrative of how a certain application was used.

Read Kokkinidis’ blog entry here.

Aug
09
2011

One of the things I love most about New York City is its brilliant use of urban space to engage the public.

On my recent trip to NYC, I had to re-visit the High Line, a revitalization project transforming an elevated rail line into an innovative public park and a space for exploration, interaction, and art installations. This summer, the High Line opened its new Section 2 extension, which leads down to a new public plaza called The Lot. To my delight I encountered Rainbow City, a whimsical playground filled with giant colourful balloon sculptures (including a bouncy castle) inviting both children and adults to play.

The installation has since been taken down, but in its place is now another great idea: an open-air roller skating rink. Wonderful inspiration for other cities.



Aug
09
2011

I came across a nice article on Smashing Magazine outlining some design guidelines for optimizing performance on mobile devices. Performance plays an important part in creating a valuable, enjoyable and trustworthy experience, one that encourages users to keep using your application or product. Not only does the application need to look amazing, it needs to feel and perform amazingly as well.

The seven guidelines are as follows:

  • Define UI brand signatures
  • Focus the portfolio of products
  • Identify the core user stories
  • Optimize UI flows and elements
  • Define UI scaling rules
  • Use a performance dashboard
  • Champion dedicated UI engineering skills

For example, front-end design can improve the perceived performance during a back-end delay by providing intermediary steps that display load progress (loading animations, partial text content, etc.). This gives the user the impression that the system is progressing through a series of steps rather than stalling, as it would seem if they simply jumped from screen 1 to screen 4, as illustrated below.
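As a minimal sketch of that idea (illustrative only; the endpoint and markup are placeholders): render an intermediate loading state immediately, then swap in the real content when the slow back-end response finally arrives.

  // Perceived-performance sketch: show an intermediate loading state right away
  // instead of leaving the screen blank for the whole round trip.
  function loadScreen(container) {
    container.innerHTML = "<p>Loading your results…</p>"; // instant feedback

    fetch("/api/results") // placeholder endpoint for the slow back-end call
      .then(function (response) { return response.json(); })
      .then(function (data) {
        container.innerHTML = ""; // the real content replaces the placeholder
        data.items.forEach(function (item) {
          var row = document.createElement("div");
          row.textContent = item.title;
          container.appendChild(row);
        });
      });
  }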

Read the article here.

May
11
2011

Having worked on several projects in the mobile space over the past year, I’m completely drawn to the site Lovely UI, which showcases inspiring mobile user interfaces.

Other resources that serve as good references are Mobile Design Patterns (iOS) and Android Patterns.

May
05
2011

Two wonderful videos explaining user experience design. Now I can show this to my family and friends whenever they ask me what I do.

Who doesn’t love a good UX design, and who doesn’t get totally frustrated with bad experience design.
Hail to all the great UX designers of the world. Spread the love for UX design !!!

ILUVUXDESIGN part I from lyle on Vimeo.

ILUVUXDESIGN part II from lyle on Vimeo.

Mar
16
2011

MIT Media Lab’s Tangible Media Group have developed Recompose, an experimental touch interface that provides tactile feedback.

Recompose is a new system for manipulation of an actuated surface. By collectively utilizing the body as a tool for direct manipulation alongside gestural input for functional manipulation, we show how a user is afforded unprecedented control over an actuated surface.

Made up of motorized tiles that pop up and down, the 3D interface can be directly manipulated by pressing down on the tiles or simply by waving your hand over various areas of the surface, which move in response to your input. The feedback is a 3D visualization of the user’s physical interaction with the tiles. A camera and projector, combined with computer vision, are used to recognize and understand the language of these physical interactions.

via Fast Company

Feb
08
2011

Ever since I started taking culinary arts classes back in the fall, I’ve developed an appreciation for beautiful and well-crafted kitchen tools. Heck, I even stroll through Williams Sonoma for fun.

Pop Chart Lab made a detailed mapping of over 100 kitchen implements. I love the visual language in this poster and I learned about some interesting new tools.

via Fast Company

