L.A.V.A. Presentation: About Me & My Research

November 5, 2010

This is a post about a presentation I gave at LAVA: Los Angeles Visual Artists, a group of VJs, DJs, artists, and designers dedicated to exploring new ways of creating and mixing visuals.

Overall my research and interests span many categories, but one of the areas I am particularly interested in is real-time interactive systems: things that people can play with and learn with, things that react to our input, and the design of those reactions and interactions with technology and media.

This image represents a novel way of thinking about an interface. In the future, interfaces will no longer be static 2D images with drop shadows on our displays. They will take on 3D form and structure; they will be flexible, morphable, liquid, and intelligent. This is a mockup of a real-time 3D interface to the data in a computer. Each rectangle is unique and represents a unique piece of data, and the layout is driven by the relationships from one data file to another, based on properties such as date created, nearest neighboring files, and so on.
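To make the layout idea a little more concrete, here is a minimal Processing (Java) sketch of how rectangles could be positioned from file metadata. The metadata fields (creation day and size) are stand-ins I made up for the example, not the properties the actual mockup used.

```java
// A minimal sketch of the idea, not the actual mockup: each rectangle stands
// for a file, and its position is derived from hypothetical metadata
// (creation day, size), so related files land near each other.
int numFiles = 40;
float[] dayCreated = new float[numFiles];
float[] fileSize   = new float[numFiles];

void setup() {
  size(600, 400);
  for (int i = 0; i < numFiles; i++) {
    dayCreated[i] = random(0, 365);   // stand-in metadata
    fileSize[i]   = random(1, 1000);  // stand-in metadata (KB)
  }
}

void draw() {
  background(20);
  noFill();
  stroke(200);
  for (int i = 0; i < numFiles; i++) {
    // map metadata into screen space: files with similar dates/sizes cluster
    float x = map(dayCreated[i], 0, 365, 40, width - 40);
    float y = map(fileSize[i], 1, 1000, height - 40, 40);
    rect(x, y, 14, 10);
  }
}
```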

This was an overview of the things I talked about. The image is the same data interface morphed into another structure. Continue after the jump for the rest of the presentation, where I go into mathematics, geometry, Open Sound Control, wireless control, mobile interaction, crowdsourcing, real-time multi-user multimodal interfaces, algorithmic form creation, and real-time synthesized visuals for VJing.

A basic understanding of mathematics goes a long way. Mathematics is at my core, physics is embedded in my brain, and engineering is in my hands. I am a maker, and to make things that push the boundaries, you need to understand how the most basic things work: 2D space, 3D space, points, scalars, vectors, integrals, differentials, gradients, curls, and so on. Learn logic, learn math, be methodical, and you too can be a great problem solver.
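As a tiny taste of the kind of vector basics I mean (nothing from the presentation itself, just an illustration), here is a Processing (Java) sketch using PVector:

```java
// A few vector operations that show up constantly in interactive graphics.
void setup() {
  PVector a = new PVector(3, 4, 0);
  PVector b = new PVector(1, 0, 0);

  println(a.mag());                              // length of a: 5
  println(PVector.dot(a, b));                    // dot product: 3
  println(degrees(PVector.angleBetween(a, b)));  // angle between a and b
}
```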

Okay, so about coordinate systems: they are the simplest way to understand transformations and functions. These functions and transformations are essential to architecture, math, physics, design, technology, art, and more. In terms of computational geometry, 3D space is a fun playground for generative form and algorithmic processes. The image above was created by a program that used computer vision to capture outlines of a person in front of a camera. As time passes, these outline portraits move further and further back in space, creating a history of how the space in front of the camera was occupied. A spatial visualization in 4D: three dimensions of space plus time.
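Here is a rough Processing (Java) sketch of the "outlines receding in time" idea. The computer vision part is replaced with a placeholder wobbly ring of points, since the point is just the transformation: every captured outline gets pushed further back in z as it ages.

```java
// Outlines recede into depth over time. The outline here is a stand-in for a
// camera-captured silhouette, which the real piece got from computer vision.
ArrayList<PVector[]> history = new ArrayList<PVector[]>();

void setup() {
  size(640, 480, P3D);
}

PVector[] captureOutline() {
  // placeholder "outline": a wobbly ring of points
  PVector[] pts = new PVector[60];
  for (int i = 0; i < pts.length; i++) {
    float a = TWO_PI * i / pts.length;
    float r = 100 + 20 * noise(i * 0.1, frameCount * 0.01);
    pts[i] = new PVector(r * cos(a), r * sin(a), 0);
  }
  return pts;
}

void draw() {
  background(0);
  translate(width / 2, height / 2, 0);

  if (frameCount % 15 == 0) history.add(0, captureOutline()); // newest first
  if (history.size() > 30) history.remove(history.size() - 1);

  noFill();
  for (int i = 0; i < history.size(); i++) {
    stroke(255, 255 - i * 8);      // older outlines fade out
    pushMatrix();
    translate(0, 0, -i * 40);      // and recede further back in z
    beginShape();
    for (PVector p : history.get(i)) vertex(p.x, p.y, p.z);
    endShape(CLOSE);
    popMatrix();
  }
}
```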

Visualization: it's what makes sense of the data in the image above. We as humans do not have the time or the capacity to absorb and understand massive amounts of information all at once. We need an overview, really, and then a way to dig deeper into what we want to find. Think of it like a micro Google…except for things that really aren't google-able. Data visualization makes sense of things; it reveals patterns and yields new insights into data and the world we live in. Visualization is how I understand complex and simple systems. Maybe it's because I have a background in design and engineering, but my core instinct when trying to solve a problem or learn something is to draw it out, make the connections, literally, and see how things interconnect. I guess that is why I was addicted to MAXMSP for a while, and why I love object oriented programming. Once you understand something, you can use it and interconnect it in almost infinite ways.

This is one of my first visualizations. It's a 3D visualization of a library data set containing the items checked out of a library over the span of one year. I wanted to spatialize the data set, so I could see it all at once and see how things related to one another. I parsed and mined the data so that the things checked out of the library (books, CDs, videos, magazines, etc.) were classified by the Dewey Decimal System, which is a knowledge classification system. The items that fit a category of knowledge were then lumped into a glowy node. Nodes of the same color belong to a bigger category, like The Arts or Science. The glowy data points were then given life by a physics engine and distributed on a sphere. The numbers represent the quantity of items checked out for that specific node. Relationships and observations are made very quickly using this type of system. The system is alive and interactive, thus begging the viewer to explore and learn. This is how it should be.
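The real piece used a physics engine to settle the nodes; as a much simpler stand-in, here is a Processing (Java) sketch that just spreads colored nodes evenly over a sphere with a golden-angle spiral. The Dewey classes and colors are faked for the example.

```java
// A much-simplified stand-in for the library visualization: nodes spread
// evenly on a sphere, colored by a made-up Dewey "hundreds" class.
int numNodes = 200;

void setup() {
  size(600, 600, P3D);
  noStroke();
}

void draw() {
  background(0);
  translate(width / 2, height / 2, 0);
  rotateY(frameCount * 0.005);

  float radius = 200;
  for (int i = 0; i < numNodes; i++) {
    // golden-angle spiral gives a roughly even spread over the sphere
    float y = map(i, 0, numNodes - 1, -1, 1);
    float r = sqrt(1 - y * y);
    float theta = i * 2.39996;                 // golden angle in radians

    int deweyClass = (i * 5) % 10;             // stand-in for the 000-900 classes
    fill(25 * deweyClass + 30, 150, 255 - 20 * deweyClass);

    pushMatrix();
    translate(radius * r * cos(theta), radius * y, radius * r * sin(theta));
    sphere(5);
    popMatrix();
  }
}
```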

For all this awesome interactive visualization and fun to occur, realize that a computer has to update the screen/graphics 60 times a second…well, at least 30 frames per second…and even today the fastest computers have a hard time calculating complex interactions between nodes or points. Technology needs to keep advancing, and methods for particle-particle interaction need to be optimized. I am still learning methods for this kind of optimization. But until then, get a good graphics card, write clean code, and pray technology keeps making strides forward in computational power.
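To see why this gets hard, consider the naive approach: every particle has to look at every other particle, which is n(n-1)/2 pairs. For 2,000 particles that is roughly 2 million distance checks, and at 60 frames per second there are only about 16 milliseconds per frame to do all of them plus everything else. A toy sketch of that brute-force loop (the repulsion rule is just an illustration):

```java
// Brute-force O(n^2) particle-particle interaction: fine for small n,
// painful for large n without spatial optimization (grids, trees, GPU...).
int n = 2000;
float[] x = new float[n], y = new float[n];

void setup() {
  size(400, 400);
  for (int i = 0; i < n; i++) { x[i] = random(width); y[i] = random(height); }
}

void draw() {
  background(0);
  // n*(n-1)/2 pairs: ~2 million distance checks per frame for n = 2000
  for (int i = 0; i < n; i++) {
    for (int j = i + 1; j < n; j++) {
      float d = dist(x[i], y[i], x[j], y[j]);
      if (d < 5) {                 // toy rule: nudge close pairs apart
        x[i] += 0.1; x[j] -= 0.1;
      }
    }
  }
  stroke(255);
  for (int i = 0; i < n; i++) point(x[i], y[i]);
}
```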

60 Frames per second in one frame…

This image is a representation of a complex real-time graphic that reacts to audio. The image is very infographic-like, but that is because it was developed to visualize a node that would communicate wirelessly with other nodes around it and transmit data. Another thing to note here is that this image/system is the result of combining many smaller systems and allowing them to interact with each other. Complex systems and behaviors are the result of many simpler systems interacting with each other.
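A toy illustration of that principle, not the system pictured: one simple system listens to the microphone level (using the Minim audio library that ships with Processing), another draws a ring of nodes, and coupling the two gives audio-reactive behavior.

```java
// Two simple systems coupled together: audio loudness drives a ring of nodes.
import ddf.minim.*;

Minim minim;
AudioInput in;

void setup() {
  size(400, 400);
  minim = new Minim(this);
  in = minim.getLineIn();            // microphone / line-in
}

void draw() {
  background(0);
  float level = in.mix.level();      // simple system #1: audio loudness
  translate(width / 2, height / 2);
  stroke(255);
  noFill();
  for (int i = 0; i < 12; i++) {     // simple system #2: a ring of nodes
    float a = TWO_PI * i / 12;
    float r = 60 + level * 400;      // the coupling: loudness pushes nodes outward
    ellipse(r * cos(a), r * sin(a), 10, 10);
  }
}
```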

Mobile phones: the personal computers of today and tomorrow. Every day these devices get smarter, and applications are developed to help augment your life. These devices are hyper-ubiquitous and are deeply embedded in the lives of billions of people all over the world. Learn about this device, its advantages, and how it affects society. Personally, I have become more ADD because of devices like these…I need something that knows what I want and keeps me on track, instead of distracting me with a billion applications and tunnels to venture into. After playing with one of these and updating your Twitter, Facebook, etc., you'll realize you just need to call someone. But they are great platforms for multi-user interactive installations and for making money.

Not quite a mobile phone, but somewhere in between phone and laptop. This is really a mysterious device to me. I want one, but I would get one because it's a great platform for performance, art creation, interaction, interface development, mobile media exploration, and learning. I am not sure why most people get these. Is it really comfortable to read on? To type on? To watch movies on? To edit photos on? To write code on…oh wait, you can't do that…yet.

Performance: these mobile devices are great for novel methods of performance. This is an image of me VJing in the crowd. This type of interaction is the future. I personally do not like being stuck in one place for more than a couple of minutes, unless I am sleeping or on my laptop, in which case I am not really in one place, mentally. Performing with these devices is a tricky challenge because of their small screen size and form factor. Thus, novel multi-modal methods of interaction with these devices must be explored in the next few years to study how we can fully utilize them to augment our lives, our performances, and our interactions with the real world and the people around us.

Open Sound Control is a protocol for communicating between multimedia devices, such as computers, haptic controllers, electronic instruments, sensors, etc. OSC allows high-resolution signals to be transmitted over a network (LAN or WiFi). This will eventually kill MIDI and ultimately save us all. OSC was originally developed to replace MIDI; it offers advances such as human-readable messages with definable address spaces and multiple data types (integers, floats, strings, etc.). Why is this a big deal? Networking has never been easier for performance and computer-to-computer communication, or even app-to-app communication. OSC is easy to understand and easy to code with. Don't believe me? Try it: http://www.sojamo.de/libraries/oscP5/index.html
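Here is roughly how small an oscP5 sketch can be in Processing (Java). The address "/fader/1" and the port number are made up for the example; the sketch just sends itself a message and prints it back out.

```java
// Minimal oscP5 example: send a message to ourselves and receive it.
import oscP5.*;
import netP5.*;

OscP5 oscP5;
NetAddress remote;

void setup() {
  size(200, 200);
  oscP5 = new OscP5(this, 12000);               // listen on port 12000
  remote = new NetAddress("127.0.0.1", 12000);  // send to ourselves
}

void draw() {
  background(0);
}

void mousePressed() {
  OscMessage msg = new OscMessage("/fader/1");  // human-readable address
  msg.add(mouseX / (float) width);              // a float payload, 0..1
  oscP5.send(msg, remote);
}

void oscEvent(OscMessage msg) {
  if (msg.checkAddrPattern("/fader/1")) {
    println("fader 1 -> " + msg.get(0).floatValue());
  }
}
```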

Interface design: my strategy is to keep it clean, organized, and intelligent. Ideally I would want an interface that changes context based on what I am doing, or predicts what I will need next. We are close to these interfaces; I believe they are transactive or liquid interfaces. This image is a basic interface layout I use for most of my programs. However, I would like to see an interface in the future that flocks along with me and reacts to me.

This is why VJing on the iPhone could work, but really it doesn't. I can have four screens of sliders and buttons and things, but it's still very complex to use, and there is a delay getting from one control to another. What really needs to happen is an interface that uses orientation, speech input, the proximity sensor, and multi-touch gestures to control navigation and controls.

This is a screenshot of an iPhone application I have developed that has an invisible liquid/transactive interface. It is multi-modal and uses the mic, touches, the proximity sensor, the accelerometer, and orientation to control a system. It is very effective in an installation context. The aesthetics are minimal because this application is really a framework in development that I might open-source one day.

Well, that's all for now. But my interests are constantly changing, yet grounded in making and exploring. Hopefully this has inspired you to focus, learn something, and change the world.
