Recap of the Digital Inquiry Symposium

In a Berkeley student’s life, using a computer or going on the Internet is like breathing. Yet, according to a study cited by the national Digital Literacy Initiative, 28 percent of Americans do not use the Internet at all. The prevalence of digital illiteracy was the first of many ideas that I encountered while attending the Digital Inquiry Symposium sponsored by the Berkeley Center for New Media this weekend.

The first speaker of the symposium was Helen Milner, chief executive of the Online Centres Foundation, a UK organization that helps communities tackle social and digital exclusion. Milner discussed the problem of digital illiteracy, especially in the UK, and how local communities are combating it by establishing computer and Internet courses and mini computing stations in local cafes, community centers and the like. In just the past few years, tens of thousands of people have become digitally literate because of the organization. Not only are people more engaged with their communities, but the government also ends up saving millions of dollars because many processes can now be done online.

Although the symposium started off by raising awareness of digital illiteracy, it quickly transitioned to the innovations and implications of new digital technologies.

Lars Erik Holmquist, a Yahoo! research scientist, spoke about “grounded innovation,” a concept of innovation that strikes a balance between inquiry and invention. In this approach, one considers both technology and users as resources to drive innovation. Part of inquiry is conducting ethnographic studies and finding out how things are; part of invention is “finding something really new,” brainstorming and coming up with flash-of-lightning ideas. Holmquist said that the problem with inquiry alone is that it can be hard to transcend the material and develop new ideas, while the problem with invention alone is the potential to completely disregard human needs and/or technical possibilities. Grounded innovation aims to be both inventive and grounded. “We can create instances of future technology, prototyping and testing again and again, until an idea moves from being blue-sky crazy to familiar and self-evident,” he said.

The segment on “Insight and Innovation” continued with rhetoric professor Hélène Mialet speaking about Stephen Hawking. Mialet said that Hawking is often portrayed as the lone scientist, but in reality he has to delegate tasks more than anyone else. Hawking can only move one eye, so everything that he is thinking has to be manifested through other people or machines. His students, for example, are the ones responsible for both the calculations and the diagrams of Hawking’s ideas. Mialet likened Hawking to a manager at the head of a company, with the company as his extended body. Her talk made me think more about what constitutes intelligence and the value of group intelligence.

Following Mialet was rhetoric professor and Center for New Media director David Bates, who talked about “Understanding Insight in the Age of the Computer.” Much of his talk echoed what I had learned in my Intro to Cog Sci (Cog Sci c1) and Basic Issues in Cognition (Psych c120/Cog Sci c100) classes about imagining whether or not the human mind/body is a machine (holla at the “Mind as Machine” metaphor from my Mind and Language class) and theories about how we develop insights into problems, so that was great for me. Typical of cog sci talks, Bates also ventured into asking “What is consciousness?” He proposed the idea of human insight as something that interrupts itself and suggested that consciousness is that place of interruption.

Day 2

iSchool professor Geoff Nunberg kicked off Day 2 by exploring the uses of digital linguistic corpora. Digitization makes traditional data more accurate and accessible, and it also makes it possible to track changes in the frequency of words. According to Nunberg, in the simplest case, lexicalization more or less coincides with the emergence of concepts, such as “propaganda,” “app” and “tweet.” Through examining digital corpora, one can find the origins, appearances and frequencies of words in texts. “Trendy” became trendy around 1962, “upscale” in 1966, “middle America” in 1969 and “yuppie” in 1981. In addition to tracking frequencies, digital data also allows one to track shifts in word meanings. Nunberg used the example of the word “asshole.” Around 1970, the term became a term of abuse rather than a term for a body part, and it also came to be used much more frequently. But how does one find out whether “asshole” is being used in the pejorative or the medical sense? Nunberg offered a simple test: track the frequency of “you asshole” compared to “your asshole.” The emerging pejorative “asshole” gradually displaced the word “phony,” and Nunberg joked that had J.D. Salinger written The Catcher in the Rye at a later date, perhaps all of the people Holden Caulfield calls “phonies” would be called “assholes” instead.
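
Just for fun, Nunberg’s pejorative-versus-medical test is easy to play with in code. Here is a minimal Python sketch, assuming a toy list of (year, text) pairs in place of a real diachronic corpus such as Google Books; the idea is simply to bin phrase counts by decade.

```python
from collections import Counter

# Hypothetical corpus: (year, text) pairs standing in for a real
# diachronic corpus such as Google Books or COHA.
corpus = [
    (1955, "He called the surgeon about your asshole, of all things."),
    (1972, "Get out of the way, you asshole!"),
    (1981, "Some yuppie cut me off. What an asshole."),
]

def decade_counts(corpus, phrase):
    """Count how often `phrase` appears in each decade."""
    counts = Counter()
    for year, text in corpus:
        n = text.lower().count(phrase)
        if n:
            counts[(year // 10) * 10] += n
    return counts

# Nunberg's simple test: the term-of-abuse reading travels with
# "you asshole," the body-part reading with "your asshole."
print(decade_counts(corpus, "you asshole"))   # Counter({1970: 1})
print(decade_counts(corpus, "your asshole"))  # Counter({1950: 1})
```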

Following in the vein of linguistics, French professor Mairi McLaughlin spoke about incorporating quantitative tools into literary study. The digital humanities is still very much a developing field, and the classic dichotomy of quality versus quantity manifests itself in literary study as close reading versus distant reading. McLaughlin talked about the tug-of-war between the qualitative and the quantitative and how scholars discussing the digital humanities often have polarized opinions on how to approach literary study. Her solution is for humanities scholars to implement standards and methods proper to the field when incorporating quantitative data. In her own work, McLaughlin has used small corpora, and she encourages letting the hypothesis under investigation guide the selection and organization of quantitative data.

The trek through Big Data continued with Yahoo! research scientist David Ayman Shamma. He discussed his research on data mining and tweet feeds, starting from the idea that people use technology to share and communicate and that social conversations happen around media. Nonetheless, engagement with the media (such as watching a video) is not necessarily correlated with the amount of social data people output (such as tweeting or writing chat messages). Shamma used the example of tracking tweet feeds during the 2009 presidential inauguration to illustrate this point. According to Shamma, there was a ton of tweeting activity throughout the inauguration, but at the moment when Obama put his hand on the Lincoln Bible, there was a sharp decrease in @ symbols in tweets: an interruption in the conversation, as @ symbols are usually directed towards another person.

“The moments of no signal are actually the moments of highest engagement,” Shamma said.

People are engrossed in the moment and are not conversing with others. Later on, in the Q&A session, professor Mialet related engagement with media to student engagement in classrooms and how it is affected by the use of technology, laptops in particular. As a side note, this is definitely something worth exploring in the future; speaking from personal experience, using a laptop in class does change the learning experience.
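
Returning to Shamma’s measurement for a moment: the signal is straightforward to approximate in code by binning tweets in time and tracking the share that contain an @-mention. A minimal sketch, assuming a made-up list of (seconds, text) tweets rather than the actual inauguration dataset:

```python
from collections import defaultdict

# Hypothetical timestamped tweets (seconds into the broadcast, text),
# a stand-in for Shamma's real inauguration dataset.
tweets = [
    (10, "waiting for the ceremony @friend"),
    (15, "@you are you watching this?"),
    (62, "he just put his hand on the Lincoln Bible"),
    (65, "history."),
    (130, "@friend that speech! wow"),
]

def mention_rate(tweets, bin_seconds=60):
    """Fraction of tweets per time bin that contain an @-mention."""
    totals, mentions = defaultdict(int), defaultdict(int)
    for t, text in tweets:
        b = t // bin_seconds
        totals[b] += 1
        mentions[b] += "@" in text
    return {b: mentions[b] / totals[b] for b in sorted(totals)}

# A dip in the @-rate marks a pause in the conversation, which on
# Shamma's reading is a moment of peak engagement.
print(mention_rate(tweets))  # {0: 1.0, 1: 0.0, 2: 1.0}
```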

IEOR and computer science professor Laurent El Ghaoui and San Francisco State University international relations professor Sophie Clavier delved further into the implications of data mining in their talk on “The Statistics of News Media.” El Ghaoui presented various approaches to analyzing text corpora. For comparing two columnists, there are co-occurrence analysis and the classification approach. Co-occurrence analysis produces a list of words that both columnists frequently use, which can be helpful in determining current trending topics. In the classification approach, the list of words produced consists of words that characterize each columnist. (As a digression, perhaps this is part of what I Write Like uses to determine which famous writer one writes like.)
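
To make the two approaches concrete, here is a small Python sketch using invented word counts for two hypothetical columnists. The shared-word list stands in for co-occurrence analysis, and a relative-frequency difference stands in for the weights a real classifier would learn from labeled columns.

```python
from collections import Counter

# Invented term frequencies for two hypothetical columnists,
# standing in for counts extracted from their actual columns.
columnist_a = Counter("economy economy election healthcare taxes".split())
columnist_b = Counter("election election culture movies healthcare".split())

# Co-occurrence analysis: words both columnists use,
# a rough proxy for shared trending topics.
shared = sorted(w for w in columnist_a if w in columnist_b)
print("shared topics:", shared)  # ['election', 'healthcare']

# Classification-style analysis: score each word by the difference in
# relative frequency; a real classifier would learn such weights
# from labeled columns instead of using this simple heuristic.
total_a, total_b = sum(columnist_a.values()), sum(columnist_b.values())
vocab = sorted(set(columnist_a) | set(columnist_b))
scores = {w: columnist_a[w] / total_a - columnist_b[w] / total_b for w in vocab}
print("most characteristic of A:", max(scores, key=scores.get))  # 'economy'
print("most characteristic of B:", min(scores, key=scores.get))  # 'culture'
```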

Clavier then explained how media discourse constructs frames (holla at Mind and Language, again), which can change the likelihood that people support certain policies. Media discourse can lead us to see other countries as dichotomies of good and evil, as these frames are fairly easy to construct. The frames develop over time, and it becomes difficult for the public to react because the frame is already established in their minds. Big data text analytics, for its part, applies machine learning at very large scale with parallelized algorithms. At the conclusion of the talk, El Ghaoui touched upon the interdisciplinary nature of analyzing big data. Although engineers can create these machine learning algorithms, “social scientists are much better equipped with coming up with what types of questions we can ask,” he said.

Fittingly, Cathryn Carson from the history department (which, by the way, is actually part of the Social Sciences division here at Cal, not Arts and Humanities) spoke next about D-Lab, a new social sciences data research laboratory focused on data mining, modeling, networking, archiving and collection design. Although policies are becoming more and more data-driven, Carson said that when she was trying to understand where the word “data-intensive” comes from, she fell back on reading and ethnographic fieldwork, echoing the sentiment of not forgoing other methods in light of numbers and data. She also stressed the importance of research design. “It’s not about the scale of the data—it’s about how are we going to pose good questions and answer them,” Carson said. Of equal import is developing research designs that accommodate both the strengths and weaknesses of the data. To conclude, Carson said that she hopes the new lab will allow people to “think deep thoughts, do cool things and find answers to questions.”

The symposium then shifted towards the arts, as writer, artist and San Francisco Art Institute professor Meredith Tromble presented on digital art. Much as Nunberg spoke about the evolution of words, Tromble talked about the word “interdisciplinary.” The word first came into use during WWII as a way to talk about mixing knowledge and professions, and since then our lexicon has expanded to include words such as “crossdisciplinary” and “transdisciplinary.” In that interdisciplinary spirit, the digital realm is undoubtedly pervading the art realm. “The presence of digital in art is like air,” Tromble said. (Not from Tromble’s talk, but as a slightly related aside, check out this article about video games at the Smithsonian.) Hybrid media is becoming the norm in performance, film, writing and more, while pure painting and sculpture are becoming more minor disciplines. With the introduction of technology into art, there are still many channels to be explored. “Our heads are on fire with new thoughts,” Tromble said.

Amidst all the buzz about technology, Nick Hoff, an SF-based writer, translator and bookseller, brought us back to one of the most traditional forms of information dispersion—books! Hoff talked about Scanners, a project he worked on with Matt Borruso, another SF-based artist. Scanners was a month-long pop-up shop in which every item was something that Hoff and Borruso thought was really great. It was meant to be a celebration of the physical—physical objects (books) in physical space (a bookstore). According to Hoff, the physical space of a bookstore shapes the way the reader interacts with the text, and it also acts as a community space.

Hoff raised questions such as: What kinds of texts and ways of discovery are being left behind by the immateriality of digital culture? What kind of public space will we lose if we lose bookstores? With the advent of buying books with one simple click, Hoff also asked, “Is our relationship with the object that we seek changed because now we get it immediately?” He suggested that there is fulfillment in the process of searching itself, and that physical search offers more possibilities, allowing you to stumble upon things you didn’t even know about. Scanners was well received by the public, though customers were confused as to whether or not the books in the store were for sale, as the store had the feeling of an art exhibit. (The books were for sale.)

The culminating segment of the symposium was on “Technologies for Creativity.” Engineer and entrepreneur Kimon Tsinteris and designer Bret Victor started off by talking about “digital narratives,” more specifically, the digital version of Al Gore’s book, Our Choice, that they helped create.  Check out the video below to see the innovation and supercoolness of this digital book:



Earlier in the month, Victor spoke about the book and interactive data graphics at a BiD seminar; this time around, the talk focused on redefining the digital book. On digital reading, they talked about wanting to honor the content and have as few techie computer controls as possible. They did not want to incorporate anything that would interfere with the reading experience; at the same time, they introduced elements that enhanced it. Victor described how interactive elements in the book hang off the page and beckon the reader to interact with them. Readers are able to see the effects of global warming at the city, national and international levels, which relates to their second point, “beyond multi-media.”

In making the book, Tsinteris and Victor wanted to personalize the reading experience through interactions. “Without the interactivity it’s just aggregate data,” Victor said. “It allows readers to ask their own questions.” One interactive element lets readers slide a scale to see the impact of reducing deforestation by an amount they specify. Another is a graph of human population over time: as the timeline approaches the present day, the entire graph drops, which is meant to convey what exponential growth feels like. Tsinteris and Victor’s presentation gave a glimpse of the future of reading and the tremendous number of possibilities that accompany it.
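
As a rough illustration of the slider idea (not the actual model or numbers from Our Choice; everything below is invented), here is a matplotlib sketch in which dragging a slider redraws a hypothetical linear emissions projection, letting the “reader” pose their own what-if question:

```python
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.widgets import Slider

# Invented baseline: emissions grow linearly from 2012 to 2050.
years = np.arange(2012, 2051)
baseline = 36 + 0.4 * (years - 2012)  # illustrative GtCO2/yr

fig, ax = plt.subplots()
plt.subplots_adjust(bottom=0.25)  # leave room for the slider
line, = ax.plot(years, baseline)
ax.set_xlabel("Year")
ax.set_ylabel("Emissions (GtCO2/yr, illustrative)")

slider_ax = plt.axes([0.15, 0.1, 0.7, 0.04])
slider = Slider(slider_ax, "Deforestation cut (%)", 0, 100, valinit=0)

def update(cut_percent):
    # Invented assumption: deforestation accounts for ~10% of emissions,
    # so a full cut removes that slice of the projection.
    line.set_ydata(baseline * (1 - 0.10 * cut_percent / 100))
    fig.canvas.draw_idle()

slider.on_changed(update)
plt.show()
```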

Stanford computer science professor Scott Klemmer was the penultimate speaker, and he talked about learning through examples, especially for web design. Through his research, Klemmer found that example-based learning leads to better-quality code. According to Klemmer, examples provide context, implementation and composition. While some may worry that examples will diminish the rate of novel ideas, Klemmer said that they actually do not: if people don’t have a better idea already, they’ll go with the example, but it won’t prevent them from having their own ideas. Among Klemmer’s other projects, his free online Human-Computer Interaction course will soon become available.

The final speaker of the event was computer science professor Björn Hartmann, who spoke about design and development tools. In informal learning, people often learn through tutorials, and the predominant formats are static text plus images or screen-capture videos. Static text and images are great for scanning, but screen-capture videos are hard to navigate. Hartmann found that mixed tutorials combining multiple formats lead to fewer errors and fewer repeated attempts than other tutorial formats. Tutorials play a huge role in DIY devices, and Hartmann also spoke about the re-emergence of maker culture. While Arduino boards have in a sense democratized invention, they are somewhat limiting because people are restricted to off-the-shelf parts. So what is the next frontier? According to Hartmann, printed physical interfaces.

All in all, the event provided a survey of the various disciplines and applications of technology today and the implications they have for us and our world. It seems we are progressing towards a more and more interdisciplinary way of thinking and working, which is precisely the core belief of berkeleyByte. ;o)
