Debugging my Creative Process

I’ve been taking print making classes this year, and have really enjoyed exploring something new. What’s been particularly interesting for me is seeing parallels between what I think of as a creative hobby – print making – and what I think of as creative work – writing software.

I showed my work publicly for the first time two weeks ago. The day after the show I had booked time in the studio. I showed up after work that day with my tools, anxious to get back to printing. It had been a couple weeks since I’d been in the studio, and last time I was there had been very productive: I’d spent the entire day working with the same image, producing six unique prints as I tried to add more texture and depth to the precise lines of the stencils I’d been creating. The result was a set of prints which were somewhat uneven in quality, but which showed a progression of control and vision. With each one I tried something a little different, until I felt like I had a good understanding of what I really wanted. Going into the studio that evening, I had about three hours of printing time, and hoped to bring that same exploration to another image.

I did wind up with five prints that evening, but none of them resonated for me like the Golden Gate Bridge prints did. As I pulled each print, I’d look at it, realize it wasn’t what I’d had in mind, and try to think about what to do next. Time in the studio usually passes quickly, and I feel like I’m racing the clock to do everything that comes to mind. But that evening felt disjointed and choppy, and when it came time to clean up, I was ready to go home. I’d tried serious, whimsical, and abstract, and none of them felt like they worked for me that night. As I rode home from the studio, I felt disappointment. The experience wasn’t the effortless expression of creativity I was used to, and the work I had produced didn’t speak to me as I hoped and had come to expect.

The next morning I looked over the pieces again, and I realized that in each case there were one or two things that I didn’t like, which overwhelmed the rest of the piece. In one case I made a choice about negative space that turned out to be the wrong one. In another I tried to do too much at once, and my vision hadn’t translated well onto paper. As I stood there looking at each piece, I thought to myself, “Why didn’t you just do this exact same image again, but change the aspect you didn’t like?” Somehow I’d forgotten that it was OK to repeat myself, to try again if the result wasn’t what I was looking for. I’d fallen into the trap of believing that creativity is all about the flash, the spark, and that it just magically happens.

If I think about writing software, I’m well aware that getting the result I want is real work: we have test suites, debuggers, and continuous integration tools for a reason. We often don’t get it right on the first try. Just because the “test suite” for print making is personal and subjective doesn’t make iteration any less important.

I had my first linocut class Wednesday evening. Linocut involves carving a linoleum plate with an image, which you can then use to make a print. Our instructor asked us to bring a simple image to use for our first plate, and to get some experience with carving. I spent some time searching for the “perfect” image to use, something that would be new and different and push the boundaries of my print making. In the end I wound up taking one of my monotype stencils and generating a scaled-down version of it. And I couldn’t be happier with how it turned out.


Yes, it’s the same cat that I’ve been working with for the past couple months. But that doesn’t mean I’m not expanding my skill set, trying different techniques, and iterating. I have plenty of time to try new images out, and if I spend the time now, debugging my technique and learning how to iterate (just like I do with software), I think my ability to tackle more complex and involved work will grow, just like it has with software.

date:2011-04-24 10:22:00
category:printing, process
tags:iteration, linocut, meta

Managing my Emacs packages with el-get

Update (20 April 2011): I’ve now tried this on my old MacBook running OS X 10.5. The bootstrap script initially threw an error, which I tracked down to an outdated version of git. Once I upgraded git and installed bzr (used by the python-mode recipe), I started Emacs and was rewarded with a fully functioning installation, complete with the extensions I want.

I’m on vacation for two weeks between jobs, so of course this means it’s time to sharpen the tools (because writing programs to help you write programs is almost always more fun than actually writing programs). I’ve been an Emacs user for many years, and of course I’ve customized my installation with additional modes and extensions. Previously I would check out code that I needed into a vendor directory, and then load it manually in init.el. And this worked fine, but that doesn’t mean I ~~can’t~~ won’t spend a chunk of my day making it better.

A friend mentioned el-get to me, and I decided to give it a try. I like the combination of recipes for installing common things, and the fact that your list of packages is very explicit in init.el (so if I need to dig into one of them, I know exactly where to begin). Additionally, since I’ll have a new computer issued for the new job, I also wanted to get things into shape so that I could easily replicate my preferred editing environment. I wound up creating a small bootstrap file to help things along, getelget.el.

getelget.el checks to see if el-get has been previously bootstrapped, and if not, performs the lazy installation procedure. After it makes sure el-get is available, it loads and executes el-get. So if you need to get a new machine up and running with Emacs and any extensions, you can drop in your init.el and getelget.el, and Emacs will take care of the rest.

To use getelget, define your el-get-sources like you normally would in init.el:

(setq el-get-sources
      '(;; your packages and recipes
        ;; etc...
        ))

Then load getelget (the following assumes you have getelget.el in your user emacs directory along with init.el):

;; getelget -- bootstrap el-get if necessary and load the specified packages
(load
 (concat (file-name-as-directory user-emacs-directory) "getelget.el"))

getelget will handle bootstrapping, loading, and executing el-get.
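The core of that bootstrap logic amounts to something like the following sketch. This is not the actual contents of getelget.el (which you can download below); the installer URL is the one el-get’s own lazy-installation instructions used at the time, and may have moved since:

```elisp
;; Sketch of the bootstrap idea, not the real getelget.el.  If el-get is
;; already installed, just sync the packages named in el-get-sources;
;; otherwise fetch and evaluate el-get's lazy installer first.
(if (require 'el-get nil 'noerror)
    (el-get 'sync)
  (url-retrieve
   "https://github.com/dimitri/el-get/raw/master/el-get-install.el"
   (lambda (_status)
     ;; The installer buffer ends with a sexp that performs the install.
     (goto-char (point-max))
     (eval-print-last-sexp))))
```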

getelget is pretty trivial; you can download it here, and I’ve waived any rights I may hold on the code using the CC0 Public Domain Dedication.

date:2011-04-19 22:17:05
category:development, tools
tags:el-get, emacs

The New Thing

There’s lots of change going on in my life right now, and the most visible external indication of that is my job: April 15 was my last day at Creative Commons. I’ll be joining the engineering team at Eventbrite in early May. I’m really excited to be joining Eventbrite, and am really looking forward to seeing what it’s like to work on a consumer web product.

Someday I’ll figure out how to write about why I’m making the move, and why now; for now, it’s the new thing in my life, and I’m looking forward to seeing how it goes.

date:2011-04-18 22:44:05
category:my life

Learning from the Web for Learning on the Web

Earlier this year Steven Stapleton from University of Nottingham emailed me and asked if I’d like to be a keynote speaker at OpenNottingham. I accepted, and was very excited to be part of the day. More recently, an opportunity unexpectedly presented itself, and I decided that after seven years, it was time to move on from Creative Commons. As a result of the timing of my departure, I was unable to travel to the UK this past week. What follows are the remarks I delivered via Skype for the event.

Update (16 April 2011): Video of my presentation via Skype is up on YouTube.

This is actually my last presentation as CTO of Creative Commons, and as I was preparing for it this week, I spent some time thinking about what questions are on my mind about open education, and where I look for answers. CC is a little different from a lot of organizations working in this space: we develop legal and technical infrastructure as much as anything, and as such we wind up with visibility into many different domains. I hope this perspective can help us think about the future of open education, and what’s next.

Let me begin by stating what I believe to be true, and what I hope you agree with going into this. First, there has been an amazing explosion in activity surrounding education and learning on the web. In less than ten years we’ve seen words and acronyms like OER, OCW, metadata, and repository enter our collective consciousness, and seen myriad exciting projects launch to support open education.

Second, there is a feeling that the web, the internet, can help us deliver educational materials to audiences that are exponentially larger, with only incremental increases in cost. This broadening of delivery puts us in a position to reach and empower people in ways that they have not been reached before: lifelong learners, remedial learners, and others who may be underserved by traditional models.

And third, we aren’t there yet. There are still challenging questions that we haven’t quite figured out how to answer, or that we’re just beginning to explore. For example: How do users discover open educational resources on the web? How do we determine what our impact and reach is? And what do we call success? I want to spend the next 10 to 15 minutes talking about some possible answers and things that I’ve been thinking about over the past year. What can a very selective history of the web tell us about where we are and what the future holds for online education?

The first two statements I made about learning on the web — that there has been a massive surge in interest and activity, and that there is a potential to reach vast audiences with only incremental additional cost — could very well have been made about the web itself in its early days. People were fascinated by the potential of this new technology, and rushed to stake their claim by publishing their own documents, sharing their knowledge. Now if I were you I’d be thinking, “Right, but I think we’re doing something a little more important than uploading scans of our favorite unicorn photos to GeoCities.” True enough, but the point is this: people were publishing, and they weren’t sure what came next.

As people began uploading and creating content and we saw this rise in the creative output of people on the web, there was an increasing need to capture this and organize it in an approachable form. You might have been able to draw a diagram of the early web on a sheet of A4 paper, but that rapidly became inadequate. The question of how you approach and understand this network of content became critical to answer. And the first answers were decidedly hands-on processes. Yahoo did not start as an index of text on the web: it started as a way to search a hand-curated set of resources, classified by human beings into categories and topics. DMoz, another directory of the web, took a similar approach: organizing resources into a hierarchy, bringing order where there was none. In both cases this was fundamentally a task of curation: what belongs in the list, and what does not. These were the web’s librarians, trying to provide an ontology that was flexible enough to handle the growing amount of content, and rigid enough that people could understand it.

Now there were definitely issues with this approach when applied to the early web, not the least of which was that they did a poor job of coping with different languages and cultures. Additionally, these directories didn’t leverage the fundamental relational nature of the web. Cross-referencing the list of resources categorized under different facets, such as language and subject, wasn’t an easy task.

As the web continued to grow, people began to realize that they could exploit the natural structure of the web — documents and links — to build a better index. Instead of searching the terms that a human used to label a resource, we could write software that followed links and created an index of the resources. So instead of hand curating a list of documents, we could trust that things linking to a document probably had a similar topic, or described the topic of what they were linking to, and that “good” resources would eventually accumulate more links than poor ones.

It’s interesting to note that even as this transition from searching a curated list to searching a text index was taking place, the curated list still served an important purpose. Both the Yahoo index and DMoz were useful as the seeds for initial crawls. By starting with those pages, and following the links on them to other pages, software was able to begin building a graph of content on the web. Curation was an important activity on its own, but it also enabled bigger and better innovations that weren’t obvious at the beginning.

So we look at the evolution of the web and see the move from curation as the primary means of discovery to curation and links as the seeds for larger and more complex discovery. Learning on the web has done a lot of this basic curation, both on a de facto and explicit basis. OCWC members publishing lists of open courseware, Connexions publishing modules and composite works, and OER Commons aggregating lists of resources from multiple sources are all acting as curators.

It’s this evolutionary question that we’re starting to face now: what is a link in online learning, and how do we compose larger works out of component pieces while giving credit and identifying what’s new or changed? Creative Commons licenses and our supporting technology provide a framework for marking what license a work is offered under, and how the creator wishes to be attributed. There seems to be widespread acknowledgement that linking as attribution is reasonable, but what about linking to create a larger work, or linking to cite a source work? Too often it is not obvious what components went into a work, or how to find them in a useful format for deriving your own work.

Last week I came across a website developing a free college curriculum for math, computer science, business, and liberal arts. At first I was really excited: the footer of the pages contained a link to Creative Commons Attribution license, and a full curriculum for things like computer science under a very liberal license is the sort of thing that gets me excited. But as I dug deeper, I found that the curriculum was actually more like a reading list: links to PDFs and web pages with instructions to read specific sections, pages, or chapters. Now it’s really exciting that the web and educational publishing on the web has progressed to the point where someone can act as a curator and assemble such a reading list, where all the resources are accessible.

What’s frustrating — and illustrative of this question of what a link means in education and how we create larger, composite documents, I think — is the information that’s missing. Links to PDF files and other sites are a start, but they don’t capture the actual relationship that exists between the component pieces. By exploring what the graph of educational works looks like, we can enable applications and tools that help answer the question of discovery, like the search engines that grew out of exploring the graph of documents on the web at large.

So with a multitude of ways to discover content on the web, publishers began asking the question: who is finding me, how, and what are they actually looking for? Am I reaching the individuals that I think I am, and how are they interacting with my site? In other words, what is success, and how do I measure it? The web as a whole answered this question through the development of tools like Google Analytics, Piwik, and others. For many publishers, success is defined as more visitors who spend more time on the site. I’m not actually sure that’s true for education on the web, or at least I don’t think it’s the entire story.

When I think about success for open education and education on the web, I think about both web metrics and education metrics. Web metrics look a lot like everyday web publishing metrics: visitors, time on site, bounce rate, etc. If you’re trying to drive visitors to a particular site or service, you might also measure conversions as part of your success metrics. Education metrics, however, are a lot tougher to work with on the open web. We may want to determine whether a particular resource helps people pass an assessment, but where does that assessment come from? And how do I even find alternative resources to compare results with?

As we continue to curate a pool of educational resources online, one of the facets that I’ve encountered frequently of late is how OER align to curricular standards or quality metrics. This is an example of curating for something other than the subject. That is, while early curation systems classified web pages based on their topic, there’s no reason they couldn’t classify based on what curricular standard they address instead. Embracing curation has the potential to enable new assessments and metrics that build on the nature of the web and are more broadly applicable. For example, if online education embraces a culture of linking and composition using links, it’s possible to imagine a measure of reach and impact based on links and referrers, instead of just visitors.

As we begin to explore these questions, there’s also the opportunity for this community to lead developments on the web instead of just following past trends. As this community of practice continues to develop, we can learn from the past and iterate to increase our impact and reach. While search engines initially just leveraged links to determine where a resource fits in the web, there is increasing recognition that structured data can help us develop tools that provide better results and user experiences. Web-scale search providers are beginning to leverage this information to enrich search results with details like a restaurant’s star rating, or the price of a product you searched for. Creative Commons uses structured data to indicate that the link to our license isn’t just another link, it actually has some meaning. By annotating links with information about their meaning, we can enable tools which give weight to different relationships based on context.

There is a great opportunity to develop rough consensus and working code around how structured data can be used to indicate the relationship between parts of a curriculum, alignment of resources to a curricular standard, or the sources a work uses. This is the next reasonable step for the use of structured data and curation on the web, and the open education community has a real opportunity to lead. As we publish resources online, we can develop a practice of linking, annotation, and curation.

This high level, incredibly vague, and very, very selective history of the web shows that there are many lessons education on the web can learn, and at least an equal number of areas where it can lead. There is excitement and passion, but we need to ask ourselves some hard questions as we move forward. What does success look like, how do we measure our impact and reach, and what can we learn from those who have gone before?

Thank you.

date:2011-04-08 15:45:10

Web Progress Notifications in Fennec

Working on OpenAttribute for Firefox Mobile (Fennec) yesterday, one of the first challenges I faced was how to get notification that a page had finished loading. In the desktop version, I attach a listener for all tabs using gBrowser.addTabsProgressListener. Unfortunately with browsers running in their own processes, this approach doesn’t work on Fennec. I spent quite a bit of time trying different approaches, all with the intent of creating a progress listener and attaching it myself. The Electrolysis wiki page says that one of the nice side effects of the message passing model is that this problem is easy to solve, but it sure didn’t feel easy.

I don’t remember why I eventually started looking at the mobile-browser source tree, but as I looked through browser.js, there it was:

messageManager.addMessageListener("Content:StateChange", this);

It turns out it is really easy: you can listen for Content:StateChange or Content:LocationChange, and get access to the same details you’d normally have in the WebProgressListener implementation.
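To illustrate the shape of the message-passing model, here’s a toy re-implementation in plain JavaScript. This is not Mozilla’s actual messageManager (and the `uri` payload field is made up for the example); it just shows the register-and-dispatch pattern the real API uses:

```javascript
// Toy dispatcher illustrating the message-passing pattern; NOT the real
// Mozilla messageManager.  Chrome code registers listeners by message
// name, and the content process "sends" messages to them.
function MessageManager() {
  this.listeners = {};
}

MessageManager.prototype.addMessageListener = function (name, listener) {
  (this.listeners[name] = this.listeners[name] || []).push(listener);
};

MessageManager.prototype.sendMessage = function (name, json) {
  (this.listeners[name] || []).forEach(function (listener) {
    listener.receiveMessage({ name: name, json: json });
  });
};

var messageManager = new MessageManager();
var loaded = [];

// As in the snippet above, the listener is an object with a
// receiveMessage method; here we just record which URIs finished loading.
messageManager.addMessageListener("Content:StateChange", {
  receiveMessage: function (message) {
    loaded.push(message.json.uri);
  }
});

// Simulate the content process announcing a state change.
messageManager.sendMessage("Content:StateChange", { uri: "http://example.org/" });
```

In the real API the listener’s receiveMessage is invoked with the details you’d otherwise get from a WebProgressListener, so porting the desktop logic is mostly a matter of moving it into that method.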

date:2011-04-04 20:41:18
tags:fennec, firefox, mobile, mozcc, OpenAttribute

Unexpected Attribution

I’m working on adding support for Firefox Mobile to OpenAttribute this weekend. I’d hoped to get that done in time for the official launch, but, well, things have been a little busy. Firefox Mobile (Fennec) uses Electrolysis, a multi-process architecture that’s a little different from what I’m used to. Looking at the documentation and APIs, it actually looks a little closer to Chrome’s extension architecture. I was looking at tutorial videos yesterday, and downloaded Mark Finkle’s boilerplate addon to get a better look at the overlays.

As I explored the boilerplate, I opened the build script. Imagine my surprise when I read this comment at the top of the script:

# -- builds JAR and XPI files for mozilla extensions
#   by Nickolay Ponomarev <>
#   (original version based on Nathan Yergler's build script)
# Most recent version is at <>

I have to assume this is based on the build script I developed for MozCC in 2004. I doubt anything from the original still survives (or at least I hope not), but appreciate the credit.

date:2011-04-03 10:53:00
tags:fennec, firefox, mozcc, OpenAttribute