Physical Computing is an approach to computer-human interaction design that starts by considering how humans express themselves physically. Computer interface design instruction often takes the computer hardware as given — namely, that there is a keyboard, a screen, speakers, and a mouse or trackpad or touchscreen — and concentrates on teaching the software necessary to design within those boundaries. In physical computing, we take the human body and its capabilities as the starting point, and attempt to design interfaces, both software and hardware, that can sense and respond to what humans can physically do.
Starting with a person’s capabilities requires an understanding of how a computer can sense physical action. When we act, we cause changes in various forms of energy. Speech generates the air pressure waves that are sound. Gestures change the flow of light and heat in a space. Electronic sensors can convert these energy changes into changing electronic signals that can be read and interpreted by computers. In physical computing, we learn how to connect sensors to the simplest of computers, called microcontrollers, in order to read these changes and interpret them as actions. Finally, we learn how microcontrollers communicate with other computers in order to connect physical action with multimedia displays.
Physical computing takes a hands-on approach, which means that you spend a lot of time building circuits, soldering, writing programs, building structures to hold sensors and controls, and figuring out how best to make all of these things relate to a person’s expression.
Cool. So we’ll build all kinds of robots?
Not quite. While the hardware skills used in physical computing are similar to those used in robotics, the concepts are a bit different. When you build robots, you’re usually focused on making devices that are autonomous, capable of navigating through the world on their own. Physical computing systems, in contrast, focus on interaction with humans. Rather than automation, we focus on using digital technologies to extend human capabilities, creating systems that are driven by a person’s intentions, decisions and actions. Where a robotics course might focus on the mechanics, drive and sensing systems of a robot, a physical computing course might concentrate more on the interface, both hardware and software, necessary for a human to direct that robot.
What will I learn in this class, and what should I know in advance?
There are three broad areas you’ll learn about in this course:
- you’ll get an introduction to microcontroller electronics, in order to understand how sensors and actuators work and how they are controlled by computers;
- you’ll learn the rudiments of programming microcontrollers, and how to interface them to other computers via serial communication;
- you’ll learn how to think about physical interaction design starting with observation of what the user physically does and then planning the best ways to sense and respond to that action.
This course assumes no prior knowledge of any of these subjects, but it does require a lot of out-of-class time and effort. Most of the real work happens outside of class, both in the shop building and programming, and in the world observing people to understand how their actions reflect their intentions.
You don’t need any prior background in electronics for this course. You’ll learn just enough in this class to connect a variety of sensors and actuators to a microcontroller so that you can realize your ideas.
This isn’t primarily an electronics course, a programming course, or a design course. There are complementary courses that go into more depth in programming, and others that go into more depth in electronics. This course is a broad overview of techniques used in physical interaction design.