April 22, 2014

Inkscape Tutorials

One of the most frequently asked questions from Inkscape users is “how do I crop an image or object?”. Inkscape is primarily a vector graphics editor, so when someone asks this question, they may mean something slightly different from a traditional image crop. This FAQ explains a few of the techniques people actually mean when they say they want to crop in Inkscape.

What do you mean when you say “crop”?

  • If you want to crop vector or bitmap/raster objects non-destructively, the Clipping feature is the easiest and most versatile option.
  • If you have a single path or object (like a star or a rectangle), and want to trim or crop that object down, then Boolean Operations are probably what you need.
  • If you are exporting your Inkscape document (SVG) to a bitmap (a PNG) with the “File > Export Bitmap” command, and only want to export a portion of your document, then changing the document size and exporting just the page is probably the solution for your needs.



Clipping

The Clipping feature is an easy and versatile way to crop vector or bitmap/raster objects in Inkscape. Let’s start with our little monster friend that I downloaded from the Open Clip Art Library:

Our monster is actually a group of 21 objects (a mixture of ellipses and paths). When clipping, it is always easier to group the objects being clipped. Grouping objects is as simple as selecting two or more objects and choosing Object > Group.

Choose the Rectangle Tool from the Toolbar, and draw a Rectangle over our poor little monster’s face.

Select both the monster (the group) and the grey rectangle. After selecting both, choose Object > Clip > Set from the menu.

…and our monster is now cropped in a nice neat rectangle.

But what has happened to the rest of the monster? Well, one of the awesome things about the Clipping feature in Inkscape is that it is non-destructive.  We can remove the clip at any time by selecting the clipped object, and then choosing Object > Clip > Release from the menu.
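Under the hood, this non-destructive behaviour comes from the standard SVG clip-path mechanism: the clip is an attribute that references a clipPath element, and releasing it just removes the reference. A minimal, hand-written sketch of the idea, built with Python's xml.etree (the IDs and sizes here are invented, and real Inkscape output is more elaborate):

```python
import xml.etree.ElementTree as ET

SVG_NS = "http://www.w3.org/2000/svg"
ET.register_namespace("", SVG_NS)

# A circle clipped by a rectangle, conceptually what Object > Clip > Set does.
svg = ET.Element(f"{{{SVG_NS}}}svg", width="100", height="100")
defs = ET.SubElement(svg, f"{{{SVG_NS}}}defs")
clip = ET.SubElement(defs, f"{{{SVG_NS}}}clipPath", id="crop")
ET.SubElement(clip, f"{{{SVG_NS}}}rect", x="0", y="0", width="50", height="50")
circle = ET.SubElement(svg, f"{{{SVG_NS}}}circle",
                       cx="50", cy="50", r="40", fill="teal")
circle.set("clip-path", "url(#crop)")  # deleting this attribute "releases" the clip

print(ET.tostring(svg, encoding="unicode"))
```

Because the circle's geometry is untouched, dropping the clip-path attribute restores it exactly, which is what Object > Clip > Release does.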

…and now our monster is back to normal! Well, the rectangle that was clipping him before is still there, but trust me, so is the monster.

But can you crop your image with something other than a rectangle? Yes! Clipping in Inkscape can be done with a wide range of clipping objects, including Text Objects…

Circle and Ellipse objects…

and Stars and Polygons.

Even a path can be used as a clipping object.

In fact, if you use a path as the clipping object, you can actually edit the clip path without having to Release it. First select the clipped object, then choose the Node Editing Tool. Your clip path will be outlined in green, with the normal path-editing nodes visible.

Now, you can edit this path, and change the area that is clipped / cropped.

Clipping is one feature in Inkscape that you will use time and time again. When working with imported bitmap/raster images, clipping is an easy way to crop without having to open up the GIMP. Additionally, when combined with blur, you can achieve some awesome effects like simple bubbles.

Boolean Operations

If you have a single path or object (like a star or a rectangle), and want to trim or crop that object down, then Boolean Operations are probably what you need. In Inkscape, you can use Boolean Operations to “crop” vector objects. This method works best when you have a single vector object to trim. Note also that, unlike Clipping, this operation is destructive: you are deleting data from your SVG. This section covers just one Boolean operation (Intersection) to achieve a basic “crop”; Inkscape offers many other Boolean operations too.
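Geometrically, intersecting a path with a rectangle is plain polygon clipping. Inkscape has its own path-boolean code, but as an illustration of what a rectangular “crop” computes, here is a minimal Sutherland-Hodgman sketch in Python (the function and the test shape are invented for this example):

```python
def clip_to_rect(poly, xmin, ymin, xmax, ymax):
    """Sutherland-Hodgman: clip polygon `poly` (a list of (x, y) vertices)
    to an axis-aligned rectangle -- the geometric core of a rectangular crop."""
    def clip_edge(points, inside, intersect):
        out = []
        for i, cur in enumerate(points):
            prev = points[i - 1]  # wraps around: the polygon is closed
            if inside(cur):
                if not inside(prev):
                    out.append(intersect(prev, cur))
                out.append(cur)
            elif inside(prev):
                out.append(intersect(prev, cur))
        return out

    def x_cross(p, q, x):  # where segment p-q crosses a vertical edge
        t = (x - p[0]) / (q[0] - p[0])
        return (x, p[1] + t * (q[1] - p[1]))

    def y_cross(p, q, y):  # where segment p-q crosses a horizontal edge
        t = (y - p[1]) / (q[1] - p[1])
        return (p[0] + t * (q[0] - p[0]), y)

    poly = clip_edge(poly, lambda p: p[0] >= xmin, lambda p, q: x_cross(p, q, xmin))
    poly = clip_edge(poly, lambda p: p[0] <= xmax, lambda p, q: x_cross(p, q, xmax))
    poly = clip_edge(poly, lambda p: p[1] >= ymin, lambda p, q: y_cross(p, q, ymin))
    poly = clip_edge(poly, lambda p: p[1] <= ymax, lambda p, q: y_cross(p, q, ymax))
    return poly

# A triangle poking out of the unit square gets trimmed at its edges.
print(clip_to_rect([(0, 0), (2, 0), (0, 2)], 0, 0, 1, 1))
```

The corner sticking out past the clip rectangle is replaced by new vertices on the clip edges; everything inside survives unchanged, which is exactly the destructive trim Path > Intersection performs.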

Take the following landscape lineart that was vectorised with Inkscape:

It is a single filled-in path with no stroke:

To “crop” this object, simply draw a rectangle over it, then select both the rectangle and the landscape beneath:

And choose Path > Intersection from the menu. Your landscape should now be cropped:

Additionally, you can “crop” vectors into shapes other than rectangles. For example, draw a shape:

Then choose Path > Intersection.

Changing the Document size

If you are exporting your Inkscape document (SVG) to a bitmap (a PNG) with the “File > Export Bitmap” command, and only want to export a portion of your document, then changing the document size and exporting just the page is probably the solution for your needs.

Consider the following landscape drawn in Inkscape. Note that the black box around the landscape is the document boundary.

If we were to go File > Export Bitmap (changed to File > Export PNG in newer versions of Inkscape) and set the export area to Page, we would get something like this:


To change the document boundary to a better size and “crop” our output, first draw a rectangle over the area you want to “crop” the document to.

Then select the black box, go to File > Document Properties, and choose “Resize Page to Drawing or Selection”. The page boundary should resize to the size of the box. Note that you may need to check “Border on top of drawing” to see the page boundary. Finally, delete the black box.
Now, when you use File > Export Bitmap (changed to File > Export PNG in newer versions of Inkscape) and set the export area to Page, your output should be a “cropped” version of the entire document:


April 22, 2014 09:40 PM

The Inkscape developers are hard at work developing the new version of Inkscape (0.91). This post is part of a series that will outline some of the awesome new features that will be available when Inkscape 0.91 is released.

The upcoming release of Inkscape has a new feature that allows an artist to easily view their entire image in greyscale. This feature is useful for those times you want to focus more on drawing layout and space weighting than on colour. This mode is separate from the existing display modes of Normal, Outline and No Filters, so you can, for example, view your unfiltered drawing in greyscale as well.

To enable this mode in Inkscape 0.91, simply choose View > Colour Display Mode > Greyscale.



If you want to try out this new feature already, you will need to download a “nightly” or “development” version of Inkscape. Links to various development builds of Inkscape are listed on the Inkscape downloads page.

April 22, 2014 06:32 PM

Here is a tutorial/article that outlines the “horizontal and vertical” Bézier curve technique. Basically, with a little practice, editing Béziers becomes a lot easier when you align all your handles horizontally or vertically. While the tutorial talks specifically about Illustrator, the concept also works with Inkscape Béziers.

In Inkscape, holding down the Alt key is the simplest way to constrain your Bézier handles to the horizontal or the vertical.


April 22, 2014 02:00 PM

April 21, 2014

Inkscape Tutorials

There are plenty of places where you can get your Inkscape questions answered, including the Inkscape forum, Inkscape Answers on Launchpad, and the Inkscape section of the Graphic Design Stack Exchange.

But if you need an answer to a question in real time, the official #inkscape user channel on irc.freenode.net is the best place to go.

Never used IRC before? No problem: the new Inkscape website has a web app that lets you connect directly through your web browser to all the knowledgeable folks in the #inkscape chat!

April 21, 2014 05:38 PM

The Inkscape developers are hard at work getting ready for the release of the new version of Inkscape (0.91). This post is part of a series that will outline some of the awesome new features that will be available when Inkscape 0.91 is released.

The Measurement tool is a new tool that lets an artist measure the elements in their drawing. To use it, simply choose the tool, click anywhere on the drawing and drag the ruler out. The Measurement tool will live-update with lengths and angles as you pass over objects in your drawing.


If you want to try out this new feature already, you will need to download a “nightly” or “development” version of Inkscape. Links to various development builds of Inkscape are listed on the Inkscape downloads page.

April 21, 2014 01:44 PM

April 20, 2014

Inkscape Tutorials

Tile Clones is a powerful feature of Inkscape: it allows you to create tiled copies of an object while tweaking variables for how they are placed and styled. The dialog, however, can be daunting for an artist who is not familiar with it.

In this instalment of the “Inkscape Quick Tips” series on Tuts+, Aaron Neize provides a brief intro to the Tile Clones dialog and shows a few quick yet awesome things you can achieve with it.


April 20, 2014 06:16 PM

April 18, 2014

Ted Gould

HUD shown over terminal app with commands visible

Most expert users know how powerful the command line is on their Ubuntu system, but one of the common criticisms of it is that the commands themselves are hard to discover and remember the exact syntax for. To help a little bit with this I've created a small patch to the Ubuntu Terminal which adds entries into the HUD so that they can be searched by how people might think of the feature. Hopefully this will provide a way to introduce people to the command line, and provide experienced users with some commands that they might have not known about on their Ubuntu Phone. Let's look at one of the commands I added:

UnityActions.Action {
  text: i18n.tr("Networking Status")
  keywords: i18n.tr("Wireless;Ethernet;Access Points")
  onTriggered: ksession.sendText("\x03\nnm-tool\n")
}

This command quite simply prints out the status of the networking on the device. But some folks probably don't think of it as networking, they just want to search for the wireless status. By using the HUD keywords feature we're able to add a list of other possible search strings for the command. Now someone can type wireless status into the HUD and figure out the command that they need. This is a powerful way to discover new functionality. Plus (and this is really important) these can all be translated into their local language.

It is tradition in my family to spend this weekend looking for brightly colored eggs that have been hidden. If you update your terminal application I hope you'll be able to enjoy the same tradition this weekend.

April 18, 2014 05:46 PM

April 14, 2014

Gail Carmichael

You may recall that I was going to a conference on a cruise ship in April.  Well, I'm back from Foundations of Digital Games 2014 and am happy to report that I have found another new favourite conference and community.  The conference went well and I made some wonderful friends. Win win!

It was a strange experience, being on a cruise ship for (mostly) academic purposes.  This was my first time on one, and to be honest, I actually prefer the resort experience more when it comes to vacations.  An overwhelming sense of "fake" was prevalent on the ship, and while resorts aren't necessarily better, on a cruise all you have is the boat.  No beach, no grass, etc.  I also didn't love the dark, cavernous feeling on most decks of the boat or the lengthy process to embark and debark.  Even the mall was kept dark and lit with neon lights most of the time.

But there is a big advantage to hosting a conference on a cruise ship: nobody can leave! This was really great for building community.  It was easy to find other attendees and spend some social time with them.  For example, on one of the early nights, there was a disco party happening in the mall.  At that point I was alone, wandering around, wondering what to do.

When I ran into some friends (old and new), I finally had someone to dance with, even if we were stuck with disco for quite some time.  I would not have danced disco alone, but with them, I had a blast.

I have to admit that the upper deck with the pools was a nice place to prepare for my paper presentations (lab mates, if you are reading this: pretend I prepared weeks in advance and practised at our meetings).  Sitting on a swinging chair looking out on the ocean is a good way to relieve last-minute stress.

And boy, was I stressed.  I wasn't worried about the actual presentation being good, but rather whether the audience of heavy-hitters in the stories-in-games field would think the work itself was any good.  It was a rare moment of feeling the imposter syndrome.  To make matters worse, I had two talks almost in a row! Good to get them over with, but no chance for feedback in between.

Fortunately, everything went very well.  The talk was good, and the questions afterwards were even better.  A lot of the people I was intimidated by in the first place made a point of telling me that my talk was interesting.  Later in the conference I even got to have an extended conversation with one of them, giving me both confidence and ideas.  (Learn more about what I presented if you're interested.)

After my talks and a couple of other interesting paper sessions, I escaped on my own for a bit to decompress.  The sun was starting to set, which was the perfect time to take a stroll around the boat.

The next day, the ship docked in Cozumel, Mexico, where two of my new friends and I went on a tour of Maya ruins (apparently you aren't supposed to include an "n") and visited a gorgeous beach.  I was really glad to have my talk behind me at that point as I could completely relax and enjoy it!

The last day of the cruise included more interesting talks and a lovely reception and dinner to cap it all off.  I left the following morning on a high, and already trying to figure out how to ensure I attended next year's conference.  I left feeling like I had finally found "my people," from my awesome roommate to the researchers with the same interests.  Thanks FDG, and hope to see you again soon!

April 14, 2014 12:12 PM

March 21, 2014

Gail Carmichael

Last year's Go Code Girl was a great success.  This year, we wanted to build on that as well as try something a bit different.  Keeping the same overall format, we're hosting two days of coding fun: the first at University of Ottawa and the second at Carleton University.

Instead of teaching the girls Processing again, we'll use the turtle module to draw fun pictures in Python, LOGO-style! Then, on day two, we're going to see what we can do with the Raspberry Pi.

I'm a big believer in teaching programming to beginners in a visual way.  Not only is it more exciting than printing text out onto a console, but it can help understand commands in a more concrete context.  In can even allow for an embodied understanding of concepts, for example by imagining yourself as the turtle moving around the screen, leaving a pen trail behind you.
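A display-free sketch of that embodied idea (this stand-in class is invented for illustration; the real turtle module draws on screen with the same forward/left commands):

```python
import math

class TextTurtle:
    """A tiny, screenless stand-in for Python's turtle module that just
    tracks the pen trail as a list of (x, y) points."""
    def __init__(self):
        self.x, self.y, self.heading = 0.0, 0.0, 0.0
        self.trail = [(0.0, 0.0)]

    def forward(self, dist):
        # Move in the direction we are facing, leaving a trail point behind.
        self.x += dist * math.cos(math.radians(self.heading))
        self.y += dist * math.sin(math.radians(self.heading))
        self.trail.append((round(self.x, 6), round(self.y, 6)))

    def left(self, angle):
        self.heading = (self.heading + angle) % 360

# Drawing a square, LOGO-style: you *are* the turtle.
t = TextTurtle()
for _ in range(4):
    t.forward(100)
    t.left(90)

print(t.trail[-1])  # ends (to within rounding) back at the origin
```

Four forwards and four left turns of 90 degrees trace out a square, and the trail makes the "pen behind the turtle" mental model concrete.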

It's no surprise that I'd favour using something visual to introduce Python.  But as you may know, I tend to favour Processing over Python as a first language.  Why use Python? Partly to get more first-hand experience in teaching it as a first language, and partly because it seems to be the language of choice for the Raspberry Pi.

I know that in the three hours we have on the second day, we won't be able to do that much with the Pis.  I want to try to give the girls enough knowledge and confidence to continue exploring on their own, should they wish to purchase a Pi of their own.  Thus, it's important that they know a bit of Python.

As an added bonus, I can experiment with the turtle approach for teaching programming to my arts and social science students next year.  I imagine it would be a big improvement over how I did it last fall.

I'll report back on how things went and provide a link to the workshop materials when it's all over.

March 21, 2014 03:48 PM

March 12, 2014

Gail Carmichael

Our two papers accepted to Foundations of Digital Games 2014 have been edited, improved, and uploaded.  I'd love to hear your thoughts on them.

A Framework for Coherent Emergent Stories

This paper is based on my thesis work.  The paper can be downloaded from the project page.

Crafting satisfying narratives while preserving player freedom is a longstanding challenge for computer games. The quest structure used by many games allows players to experience content nonlinearly, but risks creating disjointed stories when side quests only minimally integrate with the main story. We propose a flexible, scene-based emergent story system that reacts to the player’s actions while maintaining a reasonable amount of authorial control over the story. Based on the philosophy of story scenes as kernels or satellites, we define a minimal story graph that initially contains mostly disconnected nodes. Over time, the graph is built dynamically from the scenes offered to the player. In this paper, we describe the framework of our system and present an early prototype game as a case study. We end with a vision of how our framework could be used to create more coherent, emergent stories in games.

Chronologically Nonlinear Techniques in Traditional Media and Games

This paper was accepted as a work in progress.  A colleague of ours seems interested enough to work on it further, which may lead to a journal paper in the future.  The paper can be downloaded from the project page.
Although stories in games have become more sophisticated over time, their use of nonlinear techniques has not yet become as prevalent as in traditional media like novels and films. Writers have largely excluded nonlinear techniques from their toolbox, possibly because of fears of introducing inconsistencies when player actions alter past events. However, as we show through a survey of common nonlinear techniques seen in television, novels, and film, games can and have avoided these inconsistencies while maintaining gameplay agency. Many players prefer a high quality static story incorporated into strong gameplay, making the insight from this discussion immediately useful in designing nonlinear game stories. We also discuss some ways in which nonlinear techniques can offer both gameplay and story agency, hopefully bringing the quality of game stories one step closer to their traditional counterparts.

March 12, 2014 11:51 AM

March 03, 2014

Gail Carmichael

Patrick Prémont, functional programming architect from local consulting firm Tindr, came to speak to my Programming Paradigms class last week. We're currently learning functional programming in Racket, so who better to speak to the class than someone who actually uses the functional paradigm in industry? The theme of Patrick's talk was that functional programming can create very reliable software when you combine the safety of functions without side effects with the compile-time checking of a good type system.

Functional Programming - Worth the Effort from Tindr Solutions

I personally got a lot out of the talk, so hopefully my students did, too.  I appreciated hearing that real companies are using functional programming in real applications.  I've really enjoyed learning Scheme/Racket as a student, but never really heard of anyone using the paradigm beyond perhaps the ability to create anonymous functions in otherwise imperative or object oriented languages.  The pitch about reliability made a lot of sense to me.

Another theme that struck a chord was the focus on using types to make the compiler tell you when you've done something wrong.  I've developed my own philosophy of effective programming throughout my (almost) 12 years of experience, and getting the compiler to catch your mistakes is a big part of it.  I always try to write code that doesn't assume its user will use it correctly, but instead try to produce an error when it's used incorrectly.  I usually think about this from an object-oriented viewpoint, so it was neat to learn how carefully designed types in a functional language can accomplish the same.

Check out the slides above for more details, and share your thoughts on the potential of functional programming when it comes to reliable software!

March 03, 2014 09:29 AM

February 12, 2014

Gail Carmichael

I recently received an email asking whether I've ever faced programmer's block.  The emailer was referring to those times you sit and stare at a problem but never make any progress.  You feel stuck and, in some cases, just give up.  How does one get past that?

Stupid Computer!!! / f1uffster

The good news is that I think all programmers face this feeling at one time or another, and most likely, they feel it rather often.  The thing that changes with experience is usually the complexity of the problems causing stuckness, not whether a programmer will get stuck in the first place.

That might seem really depressing, especially if you hear this after your first extremely frustrating moment of stuckness.  But it's not all bad news.  As you learn more strategies for getting unstuck, you don't stay stuck nearly as long.

Some strategies: learn to experiment.  When you first start programming, you may be a bit afraid of tinkering with code, trying different things with an eye toward understanding better what's going on.  It's not just trial and error; you have to be strategic.  You have to take the time to understand why something finally works, even if you hit upon the answer randomly.

To improve your experimentation skills, you need to learn to debug.  Whether you just print the values of variables at key locations or use a fully-featured graphical debugger, learn how to display as much as you can about your code so you can ensure your mental model of it is correct.  You must learn techniques that help you first narrow down where the problem is, then you can tackle figuring out what the problem is.  For example, you might narrow down that a problem appears inside a loop.  From there, you can start printing out the variables you are changing in the loop to see if they are what you expect.
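As a toy illustration (the function and its bug are made up for this example), printing loop state makes the mental-model mismatch visible:

```python
# A deliberately buggy averaging function: we've narrowed the problem down to
# this function, so we print the loop variables to check our mental model.
def average(values):
    total = 0
    for i, v in enumerate(values):
        total += v
        print(f"i={i} v={v} total={total}")  # debug: the loop state looks fine...
    return total / (len(values) - 1)         # ...so the bug must be the divisor

print(average([2, 4, 6]))  # expected 4.0, prints 6.0: the divisor is off by one
```

Since the printed totals match what we expect, the loop is exonerated and the bug must be in the final division, which is exactly the narrowing process described above.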

As you learn more and more about programming, algorithms, data structures, and so on, your toolbox of problem solving techniques grows and grows.  Things that seemed so hard in the beginning are now no problem to spot, all thanks to experience.  Of course, new problems are introduced, but you know how to tackle them, thanks to your ability to experiment and debug.  It takes time, but damn, it feels good when you finally get unstuck again!

If you'd like to read about some more specific programming problem solving techniques, check out Think Like a Programmer, which I've reviewed here.  (I wasn't compensated to say this - I just really like the book!)

February 12, 2014 05:03 PM

February 07, 2014

Gail Carmichael

Last night I gave a lecture for our undergrad society on some of the more interesting recent developments in interactive storytelling, along with a preview of my own thesis work.  Below are my slides.

Interactive Storytelling in Games: Next Steps from Gail Carmichael 

February 07, 2014 12:56 PM

February 03, 2014

Kees Cook

Back in 2006, the compiler in Ubuntu was patched to enable most build-time security-hardening features (relro, stack protector, fortify source). I wasn’t able to convince Debian to do the same, so Debian went the route of other distributions, adding security hardening flags during package builds only. I remain disappointed in this approach, because it means that someone who builds software without using the packaging tools on a non-Ubuntu system won’t get those hardening features. Think of a sysadmin trying the latest nginx, or a vendor like Valve building games for distribution. On Ubuntu, when you do that “./configure && make” you’ll get the features automatically.

Debian, at the time, didn’t have a good way forward even for package builds since it lacked a concept of “global package build flags”. Happily, a solution (via dh) was developed about 2 years ago, and Debian package maintainers have been working to adopt it ever since.

So, while I don’t think any distro can match Ubuntu’s method of security hardening compiler defaults, it is valuable to see the results of global package build flags in Debian on the package archive. I’ve had an on-going graph of the state of build hardening on both Ubuntu and Debian for a while, but only recently did I put together a comparison of a default install. Very few people have all the packages in the archive installed, so it’s a bit silly to only look at the archive statistics. But let’s start there, just to describe what’s being measured.

Here’s today’s snapshot of Ubuntu’s development archive for the past year (you can see development “opening” after a release every 6 months with an influx of new packages):

Here’s today’s snapshot of Debian’s unstable archive for the past year (at the start of May you can see the archive “unfreezing” after the Wheezy release; the gaps were my analysis tool failing):

Ubuntu’s lines are relatively flat because everything that can be built with hardening already is. Debian’s graph is on a slow upward trend as more packages get migrated to dh to gain knowledge of the global flags.

Each line in the graphs represents the count of source packages that contain binary packages that have at least 1 “hit” for a given category. “ELF” is just that: a source package that ultimately produces at least 1 binary package with at least 1 ELF binary in it (i.e. produces a compiled output). The “Read-only Relocations” (“relro”) hardening feature is almost always done for an ELF, excepting uncommon situations. As a result, the count of ELF and relro are close on Ubuntu. In fact, examining relro is a good indication of whether or not a source package got built with hardening of any kind. So, in Ubuntu, 91.5% of the archive is built with hardening, with Debian at 55.2%.

The “stack protector” and “fortify source” features depend on characteristics of the source itself, and may not always be present in package’s binaries even when hardening is enabled for the build (e.g. no functions got selected for stack protection, or no fortified glibc functions were used). Really these lines mostly indicate the count of packages that have a sufficiently high level of complexity that would trigger such protections.

The “PIE” and “immediate binding” (“bind_now”) features are specifically enabled by a package maintainer. PIE can have a noticeable performance impact on CPU-register-starved architectures like i386 (ia32), so it is neither patched on in Ubuntu, nor part of the default flags in Debian. (And bind_now doesn’t make much sense without PIE, so they usually go together.) It’s worth noting, however, that it probably should be the default on amd64 (x86_64), which has plenty of available registers.

Here is a comparison of default installed packages between the most recent stable releases of Ubuntu (13.10) and Debian (Wheezy). It’s clear that what the average user gets with a default fresh install is better than what the archive-to-archive comparison shows. Debian’s showing is better (74% built with hardening), though it is still clearly lagging behind Ubuntu (99%):

© 2014, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

February 03, 2014 04:42 PM

January 27, 2014

Kees Cook

There will be a new option in gcc 4.9 named “-fstack-protector-strong”, which offers an improved version of “-fstack-protector” without going all the way to “-fstack-protector-all”. The stack protector feature itself adds a known canary to the stack during function preamble, and checks it when the function returns. If it changed, there was a stack overflow, and the program aborts. This is fine, but figuring out when to include it is the reason behind the various options.

Since traditionally stack overflows happen with string-based manipulations, the default (-fstack-protector) only includes the canary code when a function defines a local character array of 8 or more bytes (--param=ssp-buffer-size=N, N=8 by default). This means just a few functions get the checking, but they’re probably the most likely to need it, so it’s an okay balance. Various distributions ended up lowering their default --param=ssp-buffer-size down to 4, since there were still functions that should have been protected but that the conservative gcc upstream default of 8 wasn’t covering.

However, even with the increased function coverage, there are rare cases when a stack overflow happens on other kinds of stack variables. To handle this more paranoid concern, -fstack-protector-all was defined to add the canary to all functions. This results in substantial use of stack space for saving the canary on deep stack users, and measurable (though surprisingly still relatively low) performance hit due to all the saving/checking. For a long time, Chrome OS used this, since we’re paranoid. :)

In the interest of gaining back some of the lost performance and not hitting our Chrome OS build images with such a giant stack-protector hammer, Han Shen from the Chrome OS compiler team created the new option -fstack-protector-strong, which enables the canary in many more conditions:

  • local variable’s address used as part of the right hand side of an assignment or function argument
  • local variable is an array (or union containing an array), regardless of array type or length
  • uses register local variables

This meant we were covering all the more paranoid conditions that might lead to a stack overflow. Chrome OS has been using this option instead of -fstack-protector-all for about 10 months now.

As a quick demonstration of the options, you can see this example program under various conditions. It tries to show off an example of shoving serialized data into a non-character variable, like might happen in some network address manipulations or streaming data parsing. Since I’m using memcpy here for clarity, the builds will need to turn off FORTIFY_SOURCE, which would also notice the overflow.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct no_chars {
    unsigned int len;
    unsigned int data;
};

int main(int argc, char * argv[])
{
    struct no_chars info = { };

    if (argc < 3) {
        fprintf(stderr, "Usage: %s LENGTH DATA...\n", argv[0]);
        return 1;
    }

    info.len = atoi(argv[1]);
    memcpy(&info.data, argv[2], info.len);

    return 0;
}
Built with everything disabled, this faults trying to return to an invalid VMA:

    $ gcc -Wall -O2 -U_FORTIFY_SOURCE -fno-stack-protector /tmp/boom.c -o /tmp/boom
    Segmentation fault (core dumped)

Built with FORTIFY_SOURCE enabled, we see the expected catch of the overflow in memcpy:

    $ gcc -Wall -O2 -D_FORTIFY_SOURCE=2 -fno-stack-protector /tmp/boom.c -o /tmp/boom
    *** buffer overflow detected ***: /tmp/boom terminated

So, we’ll leave FORTIFY_SOURCE disabled for our comparisons. With pre-4.9 gcc, we can see that -fstack-protector does not get triggered to protect this function:

    $ gcc -Wall -O2 -U_FORTIFY_SOURCE -fstack-protector /tmp/boom.c -o /tmp/boom
    Segmentation fault (core dumped)

However, using -fstack-protector-all does trigger the protection, as expected:

    $ gcc -Wall -O2 -U_FORTIFY_SOURCE -fstack-protector-all /tmp/boom.c -o /tmp/boom
    *** stack smashing detected ***: /tmp/boom terminated
    Aborted (core dumped)

And finally, using the gcc snapshot of 4.9, here is -fstack-protector-strong doing its job:

    $ /usr/lib/gcc-snapshot/bin/gcc -Wall -O2 -U_FORTIFY_SOURCE -fstack-protector-strong /tmp/boom.c -o /tmp/boom
    *** stack smashing detected ***: /tmp/boom terminated
    Aborted (core dumped)

For Linux 3.14, I’ve added support for -fstack-protector-strong via the new CONFIG_CC_STACKPROTECTOR_STRONG option. The old CONFIG_CC_STACKPROTECTOR will be available as CONFIG_CC_STACKPROTECTOR_REGULAR. When comparing the results on builds via size and objdump -d analysis, here’s what I found with gcc 4.9:

A normal x86_64 “defconfig” build with gcc 4.9:
  • Without stack protector: kernel text size of 11430641 bytes, with 36110 function bodies.
  • With CONFIG_CC_STACKPROTECTOR_REGULAR: kernel text size of 11468490 bytes (+0.33%), with 1015 of 36110 functions stack-protected (2.81%).
  • With CONFIG_CC_STACKPROTECTOR_STRONG: kernel text size of 11692790 bytes (+2.24%), with 7401 of 36110 functions stack-protected (20.5%).
And 20% is still a far cry from the 100% that -fstack-protector-all would give, if support for it were added back to the kernel.

The next bit of work will be figuring out the best way to detect the version of gcc in use when doing Debian package builds, and using -fstack-protector-strong instead of -fstack-protector. For Ubuntu, it’s much simpler because it’ll just be the compiler default.
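As a sketch of what that detection might look like, here is a small compile probe in Python; the probe logic is my assumption, modeled on autoconf-style checks, and only the flag names come from this post:

```python
# Hypothetical sketch: try to compile a trivial program with the strongest
# flag first, falling back to plain -fstack-protector on older compilers.
import os
import subprocess
import tempfile

def pick_stack_protector(cc="gcc"):
    with tempfile.TemporaryDirectory() as d:
        src = os.path.join(d, "probe.c")
        with open(src, "w") as f:
            f.write("int main(void) { return 0; }\n")
        for flag in ("-fstack-protector-strong", "-fstack-protector"):
            try:
                # succeeds only if the compiler recognizes the flag
                ret = subprocess.call([cc, flag, src, "-o", os.devnull],
                                      stdout=subprocess.DEVNULL,
                                      stderr=subprocess.DEVNULL)
            except OSError:
                break  # no compiler available at all
            if ret == 0:
                return flag
    return "-fstack-protector"

print(pick_stack_protector())
```

In a Debian rules file the same probe would just be a shell `if` around a test compile, with the chosen flag appended to CFLAGS.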

© 2014, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Creative Commons License

January 27, 2014 10:28 PM

January 24, 2014

Gail Carmichael

Sir John Daniel, world authority on open, distance and online learning, came to Carleton for a special briefing on the future of online learning, covering topics such as the changing nature of the student body and its use of technology, myths and distractions in online learning, and opportunities for online learning to meet students’ needs.

via http://www.jisc.ac.uk/digifest

Many interesting tidbits were offered during Sir John's talk.  I would have loved the opportunity to get into any one of them a bit more, but I still left with some good food for thought.

After opening with the suggestion that post-secondary education is facing turbulent times, Sir John shared some interesting stats about the students taking online courses these days.  Apparently they are generally older than traditional undergraduates, paying more to study, working at the same time, and often immigrants.  Based on all this, they want to have their existing skills recognized, and they want credit courses.

Another interesting point: we tend to think that digital natives will be the ones who embrace online learning the most, but in fact, not only did the older crowd answer the survey more often, but there is no evidence of a divide between them and the younger students when it comes to technology.

So online education won't lock out any particular generation, and it can address many needs of students from all walks of life.  It is difficult to achieve wider access, higher quality, and lower cost at the same time when it comes to post-secondary education, but technology makes it possible.

Some tidbits from the talk:
  • MOOCs: these are technically not higher education, as they lack accreditation in general; the explosion seems to have more to do with the herd instinct than anything else (Mark Guzdial would probably be onside with a lot of what was said about MOOCs)
  • You can't ignore any of these three key components: study materials, student support, and logistics/administration
  • Institutions need to expect blended learning to evolve; what sort of flexibility will be required? Will campus buildings need to be refurbished for new purposes?
  • British Columbia was the first province to offer free, online open textbooks for the 40 most popular post-secondary courses (some of our profs here at Carleton are making their own open access books, too, such as Pat Morin and his data structures book)
  • Student assessment is fundamental to the learning process, and contrary to popular belief, you can actually be more creative with it in online environments
  • Contact North (who hosted this talk) recently published A Guide To Quality in Online Education, which looks like a worthwhile read
If you want to find the slides for this and similar talks, check out Sir John's website.

January 24, 2014 04:48 PM

January 17, 2014

Gail Carmichael

I've never been on a cruise before. Who would have thought that my first opportunity to sail would be for an academic conference on videogames? Come on, admit it. You're jealous.

via Wikimedia

The conference is Foundations of Digital Games, and the photo above shows where we'll be living for about 5 days in April.  This past fall, my supervisor and I worked really hard to get a paper we'd been sitting on into good enough shape to submit, and wrote up a whole new paper on my thesis work.  I was nervous about whether either would get in, but lo and behold, both did!

The paper on my thesis work, A Framework for Coherent Emergent Stories, got in with generally positive reviews, despite the very embarrassing fact that two important diagrams ended up as black boxes.  The one more negative review was actually extremely helpful - we will definitely be improving our write-up with those comments in mind.

The other paper was about non-linear stories in traditional media and games.  It was hard to know how this one would fare since the topic is more closely related to games studies, making me a bit of an outsider.  It was accepted in the work-in-progress track, which I am definitely satisfied with.  Lots of really useful comments in those reviews, too, so while this isn't the more important of the two papers, we should be able to improve it.

I have to admit that these successes are really great news after a recent string of rejections.  My publication luck is finally beginning to pick up!

January 17, 2014 12:39 PM

January 13, 2014

Gail Carmichael

I'm currently teaching a third year course on programming paradigms.  For functional programming we look at Scheme, and for logic programming, Prolog.  I took this course when I was an undergrad, and it looks like not much has changed since then.  I decided to take a look at the visual image and animation contexts now available to help students get a good feel for Scheme right up front.  I hope to continue using these sorts of examples to help make new abstract concepts about Scheme easier to understand (though how well I do with that remains to be seen).

I started my search for teaching materials at How to Design Programs, Second Edition.  This free online book teaches programming from the ground up, and assumes no prior experience.  In some ways this is good, even for third year CS majors who are learning functional programming for the first time.  It can't be our main text, but there are many good sections to reference.

Even better, the book features DrRacket's image and universe teachpacks.  That means that there are some fun, visual examples among the usual traditional applications of programming.  That's where I started to learn about these tools, since I had never seen them before myself.

Another resource I found helpful was How to Design Worlds, a supplementary online book that covers how to use an older version of universe and the related worlds.  From there, I found the chicken-crossing-the-road example you see in the screenshot above.  The code for the chicken project is available online - the only problem is that it's based on the old teachpacks, and doesn't run out of the box in the newest DrRacket IDE.

Not to worry - I updated the code (which was not much work in the end), and added a few useful comments.  If you'd like to make use of it, download the zip.  If you are just learning Scheme, try using this as a fun example to get you started.  Good luck!

January 13, 2014 03:46 PM

December 21, 2013

Kees Cook

For a long time now I’ve used mechanize (via either Perl or Python) for doing website interaction automation. Stuff like playing web games, checking the weather, or reviewing my balance at the bank. However, as the use of javascript continues to increase, it’s getting harder and harder to screen-scrape without actually processing DOM events. To do that, really only browsers are doing the right thing, so getting attached to an actual browser DOM is generally the only way to do any kind of web interaction automation.

It seems the thing furthest along this path is Selenium. Initially, I spent some time trying to make it work with Firefox, but gave up. Instead, this seems to work nicely with Chrome via the Chrome WebDriver. And even better, all of this works out of the box on Ubuntu 13.10 via python-selenium and chromium-chromedriver.

Running /usr/lib/chromium-browser/chromedriver2_server from chromium-chromedriver starts a network listener on port 9515. This is the WebDriver API that Selenium can talk to. When requests are made, chromedriver2_server spawns Chrome, and all the interactions happen against that browser.

Since I prefer Python, I avoided the Java interfaces and focused on the Python bindings:

#!/usr/bin/env python
import sys
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.keys import Keys

caps = webdriver.DesiredCapabilities.CHROME

browser = webdriver.Remote("http://localhost:9515", caps)

# load the bank's login page (placeholder URL)
browser.get("https://bank.example.com/")
assert "My Bank" in browser.title

try:
    elem = browser.find_element_by_name("userid")
    elem.send_keys("my userid")

    elem = browser.find_element_by_name("password")
    elem.send_keys("wheee my password" + Keys.RETURN)
except NoSuchElementException:
    print "Could not find login elements"
    sys.exit(1)

assert "Account Balances" in browser.title

xpath = "//div[text()='Balance']/../../td[2]/div[contains(text(),'$')]"
balance = browser.find_element_by_xpath(xpath).text

print balance


This would work pretty great, but if you need to save any state between sessions, you’ll want to be able to change where Chrome stores data (since by default in this configuration, it uses an empty temporary directory via --user-data-dir=). Happily, various things about the browser environment can be controlled, including the command line arguments. This is configurable by expanding the “desired capabilities” variable:

caps = webdriver.DesiredCapabilities.CHROME
caps["chromeOptions"] = {
        "args": ["--user-data-dir=/home/user/somewhere/to/store/your/session"],
}

A great thing about this is that you get to actually watch the browser do its work. However, in cases where this interaction is going to be fully automated, you likely won’t have a Xorg session running, so you’ll need to wrap the WebDriver in one (since it launches Chrome). I used Xvfb for this:

# Start WebDriver under fake X and wait for it to be listening
xvfb-run /usr/lib/chromium-browser/chromedriver2_server &
pid=$!
while ! nc -q0 -w0 localhost 9515; do
    sleep 1
done

# Run the automation script itself (your actual script goes here)
./my-automation.py
rc=$?

# Shut down WebDriver
kill $pid

exit $rc

Alternatively, all of this could be done in the python script too, but I figured it’s easier to keep the support infrastructure separate from the actual test script itself. I actually leave the xvfb-run call external too, so it’s easier to debug the browser in my own X session.

One bug I encountered was that the WebDriver’s cache of the browser’s DOM can sometimes get out of sync with the actual browser’s DOM. I didn’t find a solution to this, but managed to work around it. I’m hoping later versions fix this. :)

© 2013, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

December 21, 2013 07:16 AM

December 19, 2013

Gail Carmichael

With the final exam over with, it's finally time to reflect on this semester's offering of my version of our 'Introduction to Computer Science I' course for non-majors, taught in Processing.  I wrote before about designing the course by starting with learning objectives, and then using those to come up with problems to study.

The general structure of the classes was to first show off a demo of the final product, start coding it, and learn concepts along the way.  I also eventually added more toy examples to illustrate each concept, thanks in part to the anonymous midterm feedback I solicited.  Here are the main examples we looked at, and the skills we learned along the way.

Drawing Pictures with Processing

Creating a simple static drawing with code is a nice way to dive in without having to learn programming theory.  We referred to function calls as "commands" that draw shapes like ellipses, and talked about some basic drawing and colour theory.

Then, as our pictures became more complex, we learned about variables as a way to cut down on repetition of hard-coded numbers.  We also saw that our drawing commands were a lot easier to follow and debug when we used meaningful variable names.

Interactive Painting with Processing

The logical progression had us move from static drawings to a simple interactive painting program.  This allowed for the introduction of Processing's active mode, and thus an introduction to the idea of functions.  We also used it as an opportunity to start seeing how to break a programming problem down into smaller pieces that could be solved more or less in isolation.


Jukebox

The next problem was to make a working jukebox with three buttons/songs.  You could click on a button to play its song (turning off any other song that might be playing).  When a song plays, the corresponding button flashes.  For example, in the picture below, the second song is playing.

This example allowed us to learn about how the draw() loop works in Processing, how to play sounds, and what a Boolean value is and how it can keep track of the play state of a song.  We also learned about if-statements, and how to use them to determine which button a user clicks on.

Simple AI Character

I chose this problem based on the fact that many students in the class are in cognitive science.  We built a very simple state machine that was used to decide how to draw a sheep.  The sheep normally wanders around the screen toward the mouse.  If the sheep gets close to the mouse, it stops and drinks tea.  If the user then clicks while the sheep is stopped, a psychedelic ring of colours emanates from the sheep, as in the picture below.

While building this program, we learned about character movement and animation, tracking state with constant variable values, using if-statements and distance calculations to check how close the sheep was to the mouse, and writing functions that are self-contained (and thus could be reused in another sketch).  When making the rings animation, we learned about arrays to store the colours and while loops to draw them.

Weather Visualization

This problem was split into two parts, both based on real, local weather data available from the Canadian government.  For the first part, you would click on a time line and see the temperatures for the five closest dates.  For the second part, you instead click on a thermometer and see the dates of the five closest temperatures. The goal was to get students thinking about how they can visualize their data in order to explore it more easily.

Part one of the example gave an opportunity to get more comfortable with arrays and get a brief insight into reading and processing data from a file.  This also meant Strings could be formally introduced.  We learned about searching algorithms as well, since some kind of search to find the 5 closest temperatures (or dates) is needed.

In part two, we learned basic sorting techniques so we could more easily find the 5 closest values relative to temperature instead of dates.  We had been using the idea of "parallel arrays" for storing multiple pieces of information for the same entity up until now, but for sorting it is much easier to package all that information up into one variable.  Thus, we learned about objects and classes.
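The course code was in Processing, but the idea translates directly; here is a minimal Python sketch (with invented data values) of why a single list of objects beats parallel arrays once you need to sort:

```python
# One record type instead of parallel date[] and temp[] arrays: sorting the
# list keeps each date paired with its temperature automatically.
class Reading:
    def __init__(self, date, temp):
        self.date = date
        self.temp = temp

readings = [Reading("2013-01-03", -11.5),
            Reading("2013-01-01", -3.0),
            Reading("2013-01-02", -7.2)]

# the two readings whose temperatures are closest to a target value
target = -6.0
closest = sorted(readings, key=lambda r: abs(r.temp - target))[:2]
```

With parallel arrays, the same sort would have to carefully swap entries in both arrays in lockstep, which is exactly the bookkeeping that motivates classes.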

Social Media Coverage / Set Cover Problem

The last problem we got to this term was inspired by this post.  The context was the following scenario, dreamed up by some of my creative friends. Suppose we have a set of social media sites, like MySpace, Facebook, Google+, Twitter, and LinkedIn.  Each site reaches a different audience.  For example, Facebook probably reaches more young people than Twitter or LinkedIn.  Given the social media sites, and the corresponding audiences that each can reach, what is the minimum number of social media sites that still reach all audiences? (Minimizing this number means less work for the marketing team.)  This is the set cover problem.

The main concept introduced with this problem was the idea of storing objects in other objects, understanding how references work, and knowing when to share data versus make copies of it.  In the image above, each social media site acts like a button, and when pressed, it lights up the audiences the site reaches.  Site buttons should have references to the audiences, not their own copies of them.
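A minimal Python sketch of the greedy approach to that scenario (site and audience data invented for illustration; this is the classic greedy approximation for set cover, not necessarily the code we wrote in class):

```python
# Each site maps to the set of audiences it can reach.
sites = {
    "Facebook": {"teens", "young adults", "parents"},
    "Twitter": {"young adults", "journalists"},
    "LinkedIn": {"professionals", "recruiters"},
    "Google+": {"techies"},
    "MySpace": {"musicians", "teens"},
}

def greedy_cover(sites):
    needed = set().union(*sites.values())  # every audience we must reach
    chosen = []
    while needed:
        # repeatedly pick the site covering the most still-unreached audiences
        best = max(sites, key=lambda s: len(sites[s] & needed))
        chosen.append(best)
        needed -= sites[best]
    return chosen
```

The greedy choice doesn't always find the true minimum, but it's simple, fast, and close enough to make the marketing-team framing concrete.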

December 19, 2013 12:37 PM

December 16, 2013

Ted Gould

One of the goals of this cycle is to decrease application startup times on the Ubuntu phone images. Part of my work there was to look at the time taken by Upstart App Launch in initializing the environment for the application. One of the tricky parts of measuring the performance of initialization is that it contains several small utilities and scripts that span multiple Upstart jobs. It's hard to get a probe inside the system to determine what is going on without seriously disrupting it.

For measuring everything together I decided to use LTTng which loads a small kernel module that records tracepoints submitted by userspace programs using the userspace tracer library. This works really well for Upstart App Launch because we can add tracepoints to each tool, and see the aggregate results.

Adding the tracepoints was pretty straightforward (even though it was my first time doing it). Then I used Thomas Voß's DBus to LTTng bridge, though I had to add signal support.

To setup your Ubuntu Touch device to get some results you'll need to make the image writable and add a couple of packages:

$ sudo touch /userdata/.writable_image
$ sudo reboot
# Let it reboot
$ sudo apt-get update
$ sudo apt-get install lttng-modules-dkms lttng-tools
$ sudo reboot
# Rebooting again, shouldn't need to, but eh, let's be sure

You then need to setup the Upstart App Launch environment variable to get it registering with LTTng:

$ initctl set-env --global LTTNG_UST_REGISTER_TIMEOUT=-1

Then you need to setup a LTTng session to run your test. (NOTE: this configuration allows all events through, but you can easily add event filters if that makes sense for your task)

$ lttng create browser-start
$ lttng enable-event -u -a
$ lttng start

To get the Upstart starting events from DBus into LTTng:

$ dbus-monitor --profile sender=com.ubuntu.Upstart,member=EventEmitted,arg0=starting | ./dbus_monitor_lttng_bridge 

And at last we can run our test, in this case starting the webbrowser once from not running and once to change URLs:

$ url-dispatcher http://ubuntu.com
# wait for start
$ url-dispatcher http://canonical.com

And then shut things down:

$ lttng stop
$ lttng destroy browser-start

This then creates a set of traces in your home directory. I pulled them over to my laptop to look at them, though you could analyze them on the device. For complex traces there are more complex tools available, but for what I needed babeltrace was enough. All of this contributed to a set of results that we are now using to optimize upstart-app-launch to make applications start faster!

December 16, 2013 06:17 PM

Gail Carmichael

I'm really proud and honoured to have recently received one of our Faculty of Science Excellence in Teaching Awards! I got it last week at the faculty Christmas lunch.  Here's a photo from the event and the info posted to the School of Computer Science website.

The 2013 Faculty of Science Holiday Reception was the occasion for Mrs. Gail Carmichael to receive a Faculty of Science Excellence in Teaching Award. The award acknowledges Gail's teaching achievements and initiatives. Gail has a real passion for teaching computer science.

Besides all her teaching commitments, she has published four peer-reviewed papers on computer science teaching. The most recent one was published in the Journal of Computing Sciences in Colleges. Gail has an impressive list of extra-curricular activities that reflect her commitment to teaching computer science to girls. They include advisory board membership with the Anita Borg Institute for Women and Technology, mentoring for the Carleton University Women in Science and Engineering Mentoring Program, organizing and instructing for Girl Develop It! Ottawa, and co-chairing the communities committee of the Grace Hopper Celebration of Women in Computing. She has given numerous talks presenting her views on computer science teaching. Her latest, entitled Gram's House: Encouraging Girls to Consider Computer Science Through Games, was presented at the 2013 Grace Hopper Celebration of Women in Computing.

December 16, 2013 01:11 PM

December 10, 2013

Kees Cook

A nice set of recent posts have done a great job detailing the remaining ways that a root user can get at kernel memory. Part of this is driven by the ideas behind UEFI Secure Boot, but they come from the same goal: making sure that the root user cannot directly subvert the running kernel. My perspective on this is toward making sure that an attacker who has gained access and then gained root privileges can’t continue to elevate their access and install invisible kernel rootkits.

An outline for possible attack vectors is spelled out by Matthew Garrett’s continuing “useful kernel lockdown” patch series. The set of attacks was examined by Tyler Borland in “Bypassing modules_disabled security”. His post describes each vector in detail, and he ultimately chooses MSR writing as the way to write kernel memory (and shows an example of how to re-enable module loading). One thing not mentioned is that many distros have MSR access as a module, and it’s rarely loaded. If modules_disabled is already set, an attacker won’t be able to load the MSR module to begin with. However, the other general-purpose vector, kexec, is still available. To prove out this method, Matthew wrote a proof-of-concept for changing kernel memory via kexec.

Chrome OS is several steps ahead here, since it has hibernation disabled, MSR writing disabled, kexec disabled, modules verified, root filesystem read-only and verified, kernel verified, and firmware verified. But since not all my machines are Chrome OS, I wanted to look at some additional protections against kexec on general-purpose distro kernels that have CONFIG_KEXEC enabled, especially those without UEFI Secure Boot and Matthew’s lockdown patch series.

My goal was to disable kexec without needing to rebuild my entire kernel. For future kernels, I have proposed adding /proc/sys/kernel/kexec_disabled, a partner to the existing modules_disabled, that will one-way toggle kexec off. For existing kernels, things got more ugly.

What options do I have for patching a running kernel?

First I looked back at what I’d done in the past with fixing vulnerabilities with systemtap. This ends up being a rather heavy-duty way to go about things, since you need all the distro kernel debug symbols, etc. It does work, but has a significant problem: since it uses kprobes, a root user can just turn off the probes, reverting the changes. So that’s not going to work.

Next I looked at ksplice. The original upstream has gone away, but there is still some work being done by Jiri Slaby. However, even with his updates which fixed various build problems, there were still more, even when building a 3.2 kernel (Ubuntu 12.04 LTS). So that’s out too, which is too bad, since ksplice does exactly what I want: modifies the running kernel’s functions via a module.

So, finally, I decided to just do it by hand, and wrote a friendly kernel rootkit. Instead of dealing with flipping page table permissions on the normally-unwritable kernel code memory, I borrowed from PaX’s KERNEXEC feature, and just turn off write protect checking on the CPU briefly to make the changes. The return values for functions on x86_64 are stored in RAX, so I just need to stuff the kexec_load syscall with “mov -1, %rax; ret” (-1 is EPERM):

#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt

#include <linux/init.h>
#include <linux/module.h>
#include <linux/slab.h>

static unsigned long long_target;
static char *target;
module_param_named(syscall, long_target, ulong, 0644);
MODULE_PARM_DESC(syscall, "Address of syscall");

/* mov $-1, %rax; ret */
unsigned const char bytes[] = { 0x48, 0xc7, 0xc0, 0xff, 0xff, 0xff, 0xff,
                                0xc3 };
unsigned char *orig;

/* Borrowed from PaX KERNEXEC */
static inline void disable_wp(void)
{
        unsigned long cr0;

        cr0 = read_cr0();
        cr0 &= ~X86_CR0_WP;
        write_cr0(cr0);
}

static inline void enable_wp(void)
{
        unsigned long cr0;

        cr0 = read_cr0();
        cr0 |= X86_CR0_WP;
        write_cr0(cr0);
}

static int __init syscall_eperm_init(void)
{
        int i;
        target = (char *)long_target;

        if (target == NULL)
                return -EINVAL;

        /* save original bytes */
        orig = kmalloc(sizeof(bytes), GFP_KERNEL);
        if (!orig)
                return -ENOMEM;
        for (i = 0; i < sizeof(bytes); i++)
                orig[i] = target[i];

        pr_info("writing %lu bytes at %p\n", sizeof(bytes), target);

        disable_wp();
        for (i = 0; i < sizeof(bytes); i++)
                target[i] = bytes[i];
        enable_wp();

        return 0;
}
module_init(syscall_eperm_init);

static void __exit syscall_eperm_exit(void)
{
        int i;

        pr_info("restoring %lu bytes at %p\n", sizeof(bytes), target);

        disable_wp();
        for (i = 0; i < sizeof(bytes); i++)
                target[i] = orig[i];
        enable_wp();

        kfree(orig);
}
module_exit(syscall_eperm_exit);

MODULE_LICENSE("GPL");
MODULE_AUTHOR("Kees Cook <kees@outflux.net>");
MODULE_DESCRIPTION("makes target syscall always return EPERM");

If I didn’t want to leave an obvious indication that the kernel had been manipulated, the module could be changed to:

  • not announce what it’s doing
  • remove the exit route to not restore the changes on module unload
  • error out at the end of the init function instead of staying resident

And with this in place, it’s just a matter of loading it with the address of sys_kexec_load (found via /proc/kallsyms) before I disable module loading via modprobe. Here’s my upstart script:

# modules-disable - disable modules after rc scripts are done
description "disable loading modules"

start on stopped module-init-tools and stopped rc

script
        cd /root/modules/syscall_eperm
        make clean
        make
        insmod ./syscall_eperm.ko \
                syscall=0x$(egrep ' T sys_kexec_load$' /proc/kallsyms | cut -d" " -f1)
        modprobe disable
end script

And now I’m safe from kexec before I have a kernel that contains /proc/sys/kernel/kexec_disabled.

© 2013, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

December 10, 2013 11:40 PM

December 09, 2013

Gail Carmichael

I received a link to the following infographic in honour of this week's Computer Science Education Week.  It's about women in STEM in general, but has some interesting stats, so I figured I'd share it here.  Enjoy!

NJIT Online Master of Science in Computer Science

December 09, 2013 11:25 AM

November 27, 2013

Inkscape Tutorials

“How do I rotate?” is one of the most frequently asked questions from beginner Inkscape users. There are multiple ways to rotate in Inkscape, and this FAQ will show you the basics of four of them: the toolbar buttons, the on-canvas rotation handles, the Transform dialog, and the keyboard shortcuts.

Method 1, the toolbar buttons

Rotating with the toolbar buttons only lets you rotate objects 90 degrees at a time. To rotate with the toolbar buttons, first choose the select tool:

Next, select the object that you want to rotate by simply clicking on it. Once you have clicked on the group once, arrows  and a dotted line should appear around the object:

Finally, press the rotate button on the toolbar to rotate your selection in 90 degree increments.

Method 2, Rotate on Canvas

Using the toolbar buttons to rotate objects in Inkscape is by far the easiest method to discover. However, it only lets you rotate in 90 degree increments.

For a wider range of motion, using the on-canvas rotate handles is the way to go. As with the previous method, choose the select tool, and then select the object that you wish to rotate. The select box and handles should appear as before:

Now that the resize handles are visible, simply click on the object again to display the rotate handles:

Now that the rotate handles are visible, simply click on one of them, and drag it to rotate your object freely.

Method 3, the Transform dialog.

The free rotation that the on-canvas rotate controls (method 2) give is great, but what if you need more accurate control? When using method 2, you can hold down the ctrl key to limit the rotation to 15 degree increments, but what if you want to rotate the object by a specific, arbitrary amount?

That is where the transform dialog comes in. First, as with the other methods, select the object that you want to rotate. Then open the transform dialog from the menu, Object > Transform.

Switch to the “Rotate” tab of the newly opened Transform Dialog, enter in how many degrees you need your object rotated, and click apply to rotate.

Method 4, the keyboard shortcuts

This method is super simple. Select the object(s) that you wish to rotate, and press the square bracket keys ( [ or ] ) to rotate left and right in fixed steps.

For finer-grained rotation with the keyboard shortcuts, use the shortcuts alt + [ and alt + ] to rotate one degree at a time.

The four methods above outline the basics of rotating objects in Inkscape. For further information about rotating and transforming objects in Inkscape, the “Select Tool” chapter of the Inkscape Manual has more detailed information, including how to change the rotation point or rotation center of your object. The transforms chapter of Tav’s Inkscape Guide also provides some in-depth documentation of rotating in Inkscape.

November 27, 2013 03:30 PM

Gail Carmichael

This video is so beyond awesome that it deserves its own blog post.  I may or may not have shed a few tears watching it.  Keep on kicking butt, GoldieBlox. We need you.

(Edit: I've replaced the original video with an update after it was removed due to controversy surrounding the use of the Beastie Boys' song.)

Read more about this video and project here.

November 27, 2013 10:15 AM

Kees Cook

My UPS has decided that every two weeks when it performs a self-test that my 116V mains power isn’t good enough, so it drains the battery and shuts down my home network. Only took a month and a half for me to see on the network graphs that my outages were, to the minute, 2 weeks apart. :)

APC Monitoring

In theory, reducing the sensitivity will fix this…

© 2013, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

November 27, 2013 04:25 AM

November 20, 2013

Gail Carmichael

I posted a few testimonials about why arts and social science needs code last week.  I have a new one from a PhD student in psychology here at Carleton, and since it's such a good one I made a new post for it.

Chunyun Ma, PhD Candidate in Psychology, Carleton University

Why do I want to learn python?

There is the thrill of learning something new. There is also the practical part. I will focus on the latter today.

I study math cognition. What is that, you may ask? Simply put, I spend most of my time studying how people process numbers and quantities. Several months ago, I became interested in how people do arithmetic. Not to bore you with the details, but I needed to design an experiment in which participants would be doing mental addition and multiplication—all one-digit problems such as “2+3” or “3*5”. These problems would show up on a computer screen at a pre-determined interval for participants to solve. Everything seemed straightforward and easy except for one thing: I had more than 300 arithmetic problems to include in the experiment. With the software I had at that time, each problem needed to be set up manually by point-and-click for it to show up properly on the screen.

Hours of point-and-click eventually led me to think: “there must be a smarter way of doing this”. Sure enough, Python entered my horizon at that time and proved to be much more efficient. With Python, as with many other programming languages, I can write the code for presenting one arithmetic problem and reuse it for all the rest. Better yet, I can stipulate in the code what output should be generated and in what format.
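The generate-and-reuse idea is easy to picture in code. Here is a minimal sketch (not the actual experiment code, which the post doesn't show) that builds every one-digit addition and multiplication problem and writes a stimulus file in one go, replacing hours of pointing and clicking:

```python
import csv
import random

def make_problems():
    """Build every one-digit addition and multiplication problem
    (e.g. "2+3", "3*5") along with its correct answer."""
    problems = []
    for a in range(1, 10):
        for b in range(1, 10):
            problems.append((f"{a}+{b}", a + b))
            problems.append((f"{a}*{b}", a * b))
    return problems

problems = make_problems()
random.shuffle(problems)  # randomize presentation order

# Write a stimulus file that presentation software (e.g. PsychoPy) can load
with open("stimuli.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["problem", "answer"])
    writer.writerows(problems)

print(len(problems))  # 9 * 9 * 2 = 162 problems, generated in seconds
```

The same loop scales to 300 or 3000 problems without any extra manual work, which is exactly the efficiency gain described above.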

The advantage of Python over other programming languages is that it is relatively easy to learn. For psychology folks, knowing Python also has an added bonus: being part of a vibrant community of Python users from all over the world who are knowledgeable about both Python and experimental design. For example, Pygame and PsychoPy are two excellent tools for designing experiments, both products of the community's collective effort.

November 20, 2013 10:05 AM

November 13, 2013

Gail Carmichael

For both my classes, I put together a study guide that included some general tips on studying as well as what topics to focus on.  Since the general advice works for many classes, I figured I'd share it here.  It's based on what I used to do as an undergrad.

York College Library Study / CUNY Academic Commons

A good strategy is to create your own study notes, preferably on paper (manually writing will help you remember what you are thinking about better).  Here’s one possible way to make these notes:
  • Go through the course learning objectives, slides, and assignments, and make a list of key concepts that you should understand.
  • On a separate piece of paper for each concept, write the concept at the top of the page.
  • For each concept, write a general description of what the concept is about.  Try explaining it as if you were teaching someone who has never seen it before.
  • Look for ways the concept has been used in class.  How does it apply to the topic’s contextual question? What other contexts did we apply it to (in code, assignment questions, Poll Everywhere, etc)?
A few general studying tips:
  • Find ways to stay relaxed.  High stress will make your studying time far less effective. (Don’t leave it to the last minute!)
  • Try to stop working on your notes before your normal bedtime the night before the exam (if not sooner).  Get a good night’s rest - this really does matter!
  • If you have time, you can spend some time memorizing some of your notes.
  • On the day of the exam, review your notes.  By this point, you ideally shouldn't still be trying to understand the concepts or memorize key points.

Be sure to state any assumptions you make when answering the questions.

Be strategic rather than starting at the beginning and working your way through.  Read all the questions first, then start answering the questions you are most confident about.

November 13, 2013 12:31 PM

November 08, 2013

Gail Carmichael

Part 2 of my "Why are we learning this?" guide for arts and social science students is a set of testimonials from people in the field that learned to code.  I'd like to share those testimonials here.

Angelica Lim, PhD Candidate


I do research on emotions across music, voice and movement. I believe that my background in programming has let me make unique psychological experiments that most people can't do.

Here's a video and article on something similar to my work: http://wheatlab.virb.com/dynamics Programs have also let me automatically detect things like reaction time, instead of spending hours poring through videos to do manual annotation.

Kathleen Woestehoff, Desktop Support Engineer for Gilt.com

I work in IT presently but that was after a purposeful (and challenging) career switch.

I studied psychology as an undergrad and got my MS in Education with advanced certification as a School Psychologist.

I learned some code through online courses I took. I've found it super helpful for staying relevant and respected in my current career. I've heard from many people that knowing basic HTML is nice for adjusting things on social media sites (though I've never taken advantage of what I know in this way).

I've seen it be highly sought after as a skill set in many companies for their marketing department, sales, graphic design, and more.

Emily Daniels, Software Developer and Research Analyst, Applied Research and Innovation at Algonquin College


Dear Fine Artist Learning to Code,

Being able to code to express yourself is one of the most powerful tools available to artists today. Artists should look at programming languages as they do any other medium, be it watercolor, acrylic, or clay: they are all tools that allow you to develop and communicate your vision to your audience.

Artists who work with traditional mediums often have trouble keeping up with the speed of society's technological advances. What worked for Rembrandt and Picasso does not work for many of today's artists. The scarcity surrounding the creation of a unique work of art contributes so much to the value of that work, but the minute your work is shared on the internet it loses value. The catch for artists is overcoming obscurity in a world inundated with information fighting for your audience's attention. There is little you can do about this unless you are independently wealthy, are willing to work several part-time jobs to fuel your art, or change the medium you work with and the way you communicate with your audience.

Though I still love to draw, after graduating from art school I took a hard look at the mediums I used to create art. Oils and acrylics are toxic to people, bad for the environment, and a fire hazard. The act of learning by painting on 2D surfaces and throwing them out or giving them to friends seemed selfish to me and a waste of resources. Personally I think we have a responsibility to reduce or eliminate our burden on the environment as much as possible, but this way of thinking does not fit very well with making traditional fine art. It took me a while to realize how much better it would be for me to transition my art making to my computer, but when I did it was a revelation. The learning process of writing code and scrapping it or sharing what you’ve written online is cheap and wastes less time and resources in comparison.

Most software projects depend on collaboration and also individual creation, which I find is a nice mix and less isolating than the traditional artist working alone in a studio approach to creation. Solving a problem with a team of people can be immensely gratifying and can give you a sense of belonging that is hard to recreate as an individual artist.

Learning to code and using it in a project allows you to become a modern artist in many different ways. You can tailor your work to a format that a wide audience can understand and interact with easily, which increases your reach and scope. Artists want to reach people on a fundamental level and engage with them in meaningful ways, eliciting responses that go beyond the surface reaction to uncover a deeper understanding and appreciation of our world. Touching people in a meaningful way is not owned by any particular medium but by the way the artist chooses to use that medium to communicate their message.

As an artist you probably already have a thick skin developed by years of crits where others continually tear down your work and expect you to pick up the pieces. This will prepare you for similar responses to your programs and is also immensely useful in software development. It seems from my experience that most computer studies programs don’t spend nearly enough time preparing people to respond well to negative or constructive feedback of their work. It would benefit a lot of developers to be able to take criticism in stride like an artist can, so if you can, you are ahead of the game.

You will need to hone your analytic and logical thought processes in order to program effectively, but if you have a solid background in working with abstract concepts in fine arts, it is not too hard to make the jump to visualizing how components interact and learning to mold them so they interact the way you wish. A well-built program is a beautiful thing, simple and complex at the same time. Any application you make or contribute to will still feel like offspring of your mind that you are giving to the world. Stick with it: the work you'll be able to create after learning to code is a million times more rewarding than what you can currently create.

All the Best,

Stephan Gruber, Associate Professor, Department of Geography & Environmental Studies, Carleton University


When I was about 25 and just about to finish my MSc, I had a key moment that I still remember: the day I was victorious over integrals. I knew what integrals were from my high school math. But when I came across one in a paper, I would usually be left with an uneasy feeling. I knew what it meant, but had no idea what to do with it. That day, I discovered that I could discretize the integral in Excel and just find an approximate solution. This allowed me to explore the relationships I read about through practical examples that I calculated and plotted. It increased my understanding of the matter manyfold, as I could now interact with my problem and bring it from the abstract realm to, for instance, a plot or a number. Today, data processing and numerical experimentation, sometimes on high-performance computers, are a large part of my research. The power of this approach is, I believe, what enables me to choose great places to do research: mountain ranges across the world, the North, and Antarctica.
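Discretizing an integral, as described above, just means adding up the function's value over many small steps (a Riemann sum). A small Python sketch of the same trick; the function integrated here is an arbitrary example, not one from the post:

```python
def approximate_integral(f, a, b, n=10000):
    """Approximate the integral of f from a to b with a midpoint
    Riemann sum: chop [a, b] into n slices and add up slice areas."""
    dx = (b - a) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * dx  # midpoint of slice i
        total += f(x) * dx
    return total

# Example: the integral of x^2 from 0 to 1 is exactly 1/3
print(approximate_integral(lambda x: x**2, 0, 1))  # about 0.33333
```

This is precisely what a column of cells in a spreadsheet does; writing it as a function just makes the approximation reusable for any integrand.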

Learning how to organize and process large amounts of data and to write computer code has been the biggest single advance in my education. While it sounds counter-intuitive, I am convinced that this is especially true for people who think they are not good at math and who shy away from equations. Being able to write a small program to plot things is ultimately a tool for using the power of your brain better: viewing and manipulating a plot provides a broader experience than text and equations. If you work with data and models, you understand the subject you work with much better. And this will help you to better confront existing knowledge with the observations you make next time, or to plan more efficient observations.

And there is another benefit. Writing computer code forces you to organize your thoughts. This is analogous to how we see the writing of scientific text as an integral part of knowledge generation. Only when we formulate and structure what we have in our heads as text do we see where it contains flaws or needs more work. Only then can we show it to someone and ask for feedback. Both ultimately let you grow in your understanding. The same is true for writing computer code: it helps us to be clear and to put a finger on the areas that need work.

Learn how to program! It will be one of the most valuable skills you acquire in your studies. Don't be demotivated by having to spend many hours with the help function and Google to solve trivial things. All this helps you to acquire problem-solving skills and to be able to build the tools you need instead of being limited by what is available.

Stephanie Jackson, E-Communications Strategist, University of Ottawa

Word and Excel, just like the hardware of a computer itself, are tools to do a job. You wouldn't use a laptop to hammer in a nail, and you wouldn't use a screwdriver to analyze complex statistical data sets. Just about every complex task you work on requires the right tools for the right results. In the current technology-focused world, understanding the basics of major coding languages, as well as how they interact with one another, is critical for achieving the best results with the resources you have available. If I don't have a grasp on how various coding languages 'talk' to one another, even without having a proficiency in coding the language itself, I cannot effectively create a system which is both efficient and sustainable.

Of course, my experience is primarily web and web application based, so more PHP and Ruby, less Python, but the principle still stands ;o)

Kristen Jeanette Holden, Stay-at-home-mom, pausing from PhD studies at University of Chicago

I've got an MA in humanities from a top-3 school and focus on Japanese war/postwar film and literature. It's a small field with maybe a dozen experts outside Japan, and lost films and texts are still being found in secret vaults all over the world (the Japanese used film reels as fuel during the war, so colonial Korean and Manchurian political films were all thought to be destroyed). The crappy websites of eccentrics can lead to published papers and even full books because so little information is available. Just knowing HTML and JavaScript is incredibly helpful. View Source got me through many big papers.

Who knew that my 15 year old self's desire to put up a page on hometown.aol.com with comic sans paragraphs over animated backgrounds and blaring midi music would help me in grad school?

Rachel B. Bell, Website Designer at Verbatim Design in Providence, RI

I majored in Studio Art at Smith College. My focus was on Photography and Reduction Linoleum Cuts. I took one class that included about a week of working with basic html. Little did I expect at the time that two years later, I would be a website designer and search engine optimizer.

Bonus: Chris Bosh, NBA Superstar

Being a kid of the 1990s and living in a house run by tech-savvy parents, I began to notice that the world around me was spinning on an axis powered by varying patterns of 1s and 0s. We’d be fools to ignore the power of mastering the designing and coding of those patterns. If brute physical strength ran one era, and automation the next, this is the only way we can keep up. Most jobs of the future will be awarded to the ones who know how to code.

We use code every time we’re on the phone, on the web, out shopping — it’s become how our world is run. So I take comfort in having a basic understanding of how something as big as this works.

Read the whole article: http://www.wired.com/opinion/2013/10/chris-bosh-why-everyone-should-learn-to-code/

November 08, 2013 03:57 PM

November 04, 2013

Gail Carmichael

In my Intro to Computers for Arts and Social Sciences class, I have been introducing the students to a bit of programming and algorithmic thinking in addition to the traditional topics (data representation and MS Office).  I try to connect back to why learning to code is useful, even in arts fields, but I am not always successful.  So, in hopes of doing a better job making my case, I decided to put together a document that summarizes the answer to the question "Why are we learning this?"

Source: http://en.wikipedia.org/wiki/File:ArtificialFictionBrain.png

This post summarizes some of what I've got so far.  I've also been collecting testimonials to share with the students.  These are stories from arts and social science students, graduates, and professors explaining why code is useful to them.  I will share those another day.

If you've got any ideas to add to this, please do share!

Why Learn About Data Representation

It’s inevitable: no matter what field you’re in, you’ll have to work with data in some form or another.  Having a good mental model of how information is stored on a computer can help you not only manipulate that data, but think about the best ways of collecting, storing, and analyzing it.

For example, if you need to collect images for a project, you might have previously just used colour images by default.  But now that you know how much less space grayscale images can take, you might decide that they are the better choice when colour is not needed.
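The savings are easy to quantify: an uncompressed colour image typically stores three bytes per pixel (one each for red, green, and blue), while 8-bit grayscale stores one, so grayscale needs a third of the space. A quick back-of-the-envelope sketch (the 1920×1080 image size is just an example):

```python
def raw_size_bytes(width, height, bytes_per_pixel):
    """Raw (uncompressed) size of an image in bytes."""
    return width * height * bytes_per_pixel

w, h = 1920, 1080
rgb = raw_size_bytes(w, h, 3)   # 24-bit colour: 3 bytes per pixel
gray = raw_size_bytes(w, h, 1)  # 8-bit grayscale: 1 byte per pixel

print(f"RGB:       {rgb / 1e6:.1f} MB")   # 6.2 MB
print(f"Grayscale: {gray / 1e6:.1f} MB")  # 2.1 MB, one third the space
```

Real file formats compress these numbers further, but the three-to-one ratio of raw data is the mental model worth keeping.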

Why Learn Computational Thinking

Computational thinking is about problem solving. We use computers to solve problems in every field these days. It’s not enough to be able to follow a tutorial on “how to do X” - you need a deeper understanding of how computation works in order to tackle previously unseen problems and know that you are solving them correctly and efficiently.

Here are some specific reasons to practice this type of thinking:
  • You need to know how to take a problem you need to solve and transform it into something a computer can actually work with. We think too high-level for a computer to “get” what we want to do without breaking things down into really specific chunks.
  • The world is becoming increasingly complex, and you need to be able to deal with that complexity.
  • Similarly, you need to be able to handle ambiguity and open-endedness in the way a problem is defined and even in how you are expected to solve it.

Why Learn About Algorithms

Algorithmic thinking is part of computational thinking.  You might run into a situation where you have to program your own algorithms as solutions to problems.  Even if you never touch a line of code again, learning algorithmic thinking is useful.  Here’s why:
  • You build a mental model for how computers work.  This helps you choose the right tool for the job when you have to solve your own problems, and do a better job of troubleshooting when things go wrong.
  • The ability to write out an idea correctly and unambiguously transfers to the ability to write effective instructions or arguments in essays and other documents.
  • To think algorithmically is to be able to translate a problem into something the computer can solve, whether you use Python, Excel, SPSS, or some other tool to actually solve it.

Why Learn How to Code

This is a big one, obviously. Being able to solve problems with code means you can tackle problems that Excel and other programs can’t help you with (for example, text-based problems).  It also means that you have full control over the solution, giving you the ability to customize it to suit your needs exactly.

Here are some general reasons to learn how to code:
  • Writing some code is the best way to understand concepts that can be applied elsewhere, like if statements and while loops.  It is also the most precise form of algorithmic thinking.
  • If you know how to code, you have the power to be endlessly creative.  From interactive fiction to web apps to computational art, there’s a lot you can do with code that is difficult or impossible without it!
  • Writing simple programs can help you automate the really boring parts of using a computer.
  • If you have an idea, you don’t have to wait for someone else to create it. You can do it yourself!
  • If you can put a knowledge of programming together with whatever it is you are studying, you become extremely valuable to that industry.  High paying jobs that few people can do well become open to you.
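As a taste of the first point above, here is a tiny Python example (invented for illustration) that combines an if statement and a while loop, counting how many doublings it takes to pass a target:

```python
def doublings_to_reach(start, target):
    """Count how many times `start` must be doubled to reach `target`."""
    count = 0
    value = start
    while value < target:   # a while loop repeats as long as its condition holds
        value *= 2
        count += 1
    if count == 0:          # an if statement chooses between two outcomes
        return "already there"
    return count

print(doublings_to_reach(1, 1000))  # 10, since 2**10 = 1024
```

The same two constructs appear in essentially every programming language, which is why practicing them once transfers so widely.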

Here are some real example problems that can best be solved with code:
  • Rescaling climate change data to analyze it in new ranges
  • Text analysis by making a concordance of a text
  • Digitizing horizon shading (when does the sun rise/set behind mountains, local rocks, trees, ...)
  • Removing noise from measurements of snow height made by an ultrasonic sounder
  • Facilitating collection and analysis of data from an experiment that determines whether seeing the sign of a simple mathematical equation before the numbers gives someone an edge in solving that problem quickly
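The concordance problem in the list above is a nice example of a task that is trivial in code and hopeless by hand: map every word of a text to the positions where it occurs. A minimal sketch (the regular expression used to define a "word" is a simplifying assumption):

```python
import re
from collections import defaultdict

def concordance(text):
    """Map each word to the list of positions where it appears."""
    positions = defaultdict(list)
    for i, word in enumerate(re.findall(r"[a-z']+", text.lower())):
        positions[word].append(i)
    return dict(positions)

sample = "To be or not to be, that is the question."
index = concordance(sample)
print(index["to"])  # [0, 4]
print(index["be"])  # [1, 5]
```

Fed an entire novel instead of one sentence, the same dozen lines build a full concordance in seconds.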

These are some answers I got on Twitter when I asked “Why do you think an arts/social science student should learn to code? Reply with your reason, be it fun or practical, general or specific.”
  • “Same reason a CS student should learn from the arts: a different perspective is aways [sic] 'a good thing'.”
  • “Social science - 1 word, data. Arts - creativity.”
  • “So they know enough about the difficulty of software dev that, if elected, they don't do a http://healthcare.gov”
  • “because Robert A. Heinlein: http://www.elise.com/quotes/heinlein_-_specialization_is_for_insects”
  • “the world is increasingly complex, and built, more every day, in code. being unable to understand basic science or software will soon be nearly as self-limiting as lacking numeracy or literacy is for many people now.”
November 04, 2013 09:19 PM

October 28, 2013

Gail Carmichael

Way back in the springtime I signed up for a Coursera offering on video games and learning. I had no idea when the course would actually be offered, and forgot about it until they finally announced, around the end of September, that the course would be beginning shortly. Right in the middle of my first term of full-time teaching. A term in which I have 700 students. Talk about timing!

Despite the possibility that I couldn't give this course as much attention as I'd like, I decided to give it a try anyway. It's an area I'm interested in on both a personal and a research level, and if nothing else, I figured the videos should be interesting.

So far, so good in that regard. I was excited to see so many familiar faces in the lectures and concept videos. They aren't people I know personally, but people whose work I've been following for some time. The topics have been interesting, and I really enjoyed seeing the Games Learning Society lab space (totally a place I could see myself working).

I've consistently been about a week behind the lecture and assignment schedule, so I often miss out on the more timely discussion in the forums. I'm not sure it matters much in my case, though, since I don't have a huge amount of time to dedicate to interacting with other students anyway.

One question that's fair to ask is whether I've actually learned anything from the course so far. Honestly... I'm not sure. Because it's an area I've been watching for a while now, I probably know most of the basics already. I also can't remember many of the specifics of what was covered in the lecture-style videos (they are very, very hard to focus on, unlike the animation-supported concept videos). That said, it is nice to have the review and to think about new things via the assignments.

My experience with this, my first MOOC, has been good enough that I signed up for another one that's more directly related to my thesis project: The Future of Storytelling.

October 28, 2013 10:37 AM

October 21, 2013

Inkscape Tutorials

It has been a long wait for the next version of Inkscape. The last major release was over three years ago, back on August 23, 2010. Since then, the Inkscape developers have been hard at work adding a multitude of new and awesome features to our favourite open source vector graphics editor.

However, the question that most people ask is: when is the next version of Inkscape being released? About a month ago, after a long-standing blocker was resolved, Inkscape developer Martin Owens asked this question on the inkscape-devel mailing list. The basic consensus on the mailing list was that all the important blocker bugs (the count was 10 in September) needed to be resolved before the release process could even start.

Now, after a busy month, the awesome Inkscape developers have whittled this down to 3 blockers. Martin writes on inkscape-devel:

Hey Devs,
This is the bi-weekly report on our release-hope goal:
Blockers: 3
 * High #1163449 Imported bitmaps appear blurry when zoomed in
 * Medium #953992 Imported pattern fill disappears while transforming
 * Medium #1005892 Patterns applied to text objects are blurred
If you can fix one of these, please do. We can use all the help to
debug, locate the errors causing these regressions and fix them. These
blockers are high priority for our project goals.

So, we are inching ever closer to an Inkscape release!

October 21, 2013 02:12 PM

Gail Carmichael

You may remember hearing about a project I've been involved with for the last couple of years. We're working on a book about computer science designed for beginners; something that could be used, for example, in my "introduction to computers for arts and social sciences" class. Well, we've finally got two chapters ready for review, and would love to get your feedback on how we're doing so far.

Note: If you're a beginner in the world of computer science, even better!

The first sample chapter is on Data Representation. This is the first chapter from Part I of the book, which covers computing fundamentals. The second chapter is on Artificial Intelligence. This is one of our in-depth subject areas and builds on concepts introduced in basic chapters. It will appear in Part II of the book, which surveys some of the major fields found within computer science.

If you're interested in helping out, you can review either one of the chapters, or both. There is a short survey to fill in about the chapters. We also intend to publish a list of our reviewers, should you wish to have your name included.

If you're interested, please contact me, and I'll send you all the information and links you need. (If you've left your email with us in the past and haven't heard from us yet about this review opportunity, you probably will. Please still feel free to contact me directly now.)

October 21, 2013 11:00 AM

October 17, 2013

Gail Carmichael

I gave a talk at this year's Grace Hopper on what I've been working on for my thesis project:

Coherent Emergent Stories in Video Games
Crafting satisfying narratives while preserving player freedom is a longstanding challenge for computer games. Many games use a quest structure, allowing players to experience content nonlinearly. However, this risks creating disjointed stories when side quests only minimally integrate with the main story. This talk introduces the problem of nonlinear storytelling in games and discusses our flexible, scene-based story system that reacts dynamically to the player's actions.

My slides are embedded below and you can learn more on my website.

October 17, 2013 10:03 AM

October 12, 2013

Inkscape Tutorials

This is the next in the extensive series of tutorials from the fantastic 2D Game Art for Programmers blog. In this tutorial, Chris builds on his previous tutorial on gradients by explaining how to draw this awesome aquarium-type scene.

October 12, 2013 04:49 PM

October 09, 2013

Gail Carmichael

Maria Klawe (Harvey Mudd College, far left), Brenda Laurel (Purple Moon, far right), and Kim Surkan (MIT) gave an insightful panel about the image of geeks in the media. In some ways, I didn't learn much that was new, but I liked hearing about their personal experiences and picking up new language with which to talk about the problem.

For this post, I'd like to share some of my (mostly raw) notes from the session.

Maria's Part
  • no progress made in changing the image of professionals in the media
  • is a believer in failure
  • "people listen to you more" when you have gray hair
  • remembers a time when there were very few female doctors and lawyers
  • in the 70's, shows depicted both male and female doctors and lawyers (though not in the same show), and this caused a flood of women into these professions
  • more recently: forensic crime shows caused an influx of women studying the field, even though job opportunities in forensic science and CS are at opposite ends of the spectrum
  • it's not just about tech women (there's a problem with the portrayal of all women, and of tech guys as well)
  • in the mid-90's, she was seated at dinner beside the NBC exec responsible for the Saturday night movie series; she said we needed shows about scientists and engineers; he said nobody knew any engineers in real life so audiences wouldn't relate!
  • tried to write a pilot episode but saw halfway through that it was going nowhere (too unrealistic)
  • someone wrote a pilot for a show called Rush about a Silicon Valley start-up trying to win the DARPA challenge; she sent it out to 20 people with connections in the media; everyone loved it; but it went nowhere!
  • optimistic but doesn't know what else to personally try
Brenda's Part
  • looking at the GHC poster from last year: not geeks, wearing nail polish; white woman in the middle giving advice to the black woman, Asian woman staring into space (did a photoshop to fix this)
  • Numbers proves it's possible
  • we are responsible for our own representations ("I like the way we look!")
  • "put out our own self-representations"
  • "deny power to the spectacle"
  • "do good work and get noticed for it"
  • check out http://femtechnet.tumblr.com and Wikipedia storming
Kim's Part
  • media consumption is growing (2010: average 7 hours and 38 minutes)
  • stereotypes of women as bad at math, and of STEM fields as boring and unfulfilling
  • it's hard to notice what's not there, but when a group is missing you begin to absorb the idea that, for example, all doctors are men, white, etc.
  • computer science is the only STEM field in which female participation is declining
  • it was not always this way; women were active in programming (e.g. ENIAC)
  • the nerd stereotype is the most common explanation for low female participation
  • sexism in CS culture (especially gaming): recruitment, hackathons, sexual harassment/rape culture, lack of role models

October 09, 2013 08:12 PM

October 07, 2013

Gail Carmichael

I love the curriculum that Zoe Wood and Julie Workman created for their school's CS0 course, which they spoke about at GHC13. It uses Processing, like the CS1 course that I'm currently teaching for non-majors, but focuses solely on computational art for its context. My course has a bigger variety of problems to introduce concepts, but that's not necessarily a better thing. I do like their course's focus.

Although the hope is that some of these students continue on in CS, this course is not as in-depth as a full-fledged CS course. Some of the outcomes include students understanding that computers process commands one at a time, commands must be precise, variables allow for flexibility, functions allow simple concepts to be combined into complex programs, and playing is ok! (I hope my students walk away with that last one especially.) The curriculum embodies basic computational thinking, basic programming skills, working in teams, learning basic college skills, and enjoying computer science. It covers shapes and 2D coordinates, colours, interactivity, animation basics, geometric shapes (implicit and parametric), images (arrays and pixels), and particle systems (classes).
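That last topic, particle systems as an introduction to classes, works as a motivating example outside Processing too. A stripped-down Python sketch of the idea (names and numbers here are illustrative, not from the course):

```python
class Particle:
    """One particle: a position and a velocity, updated each frame."""
    def __init__(self, x, y, vx, vy):
        self.x, self.y = x, y
        self.vx, self.vy = vx, vy

    def update(self, gravity=0.1):
        self.vy += gravity  # gravity accelerates the particle downward
        self.x += self.vx
        self.y += self.vy

# A particle system is just a list of particles stepped together each frame
particles = [Particle(0, 0, 1, -2), Particle(0, 0, -1, -2)]
for frame in range(3):
    for p in particles:
        p.update()

print(particles[0].x, particles[0].y)
```

In Processing the `update` call would live inside the draw loop with a circle drawn at each particle's position, but the class structure is exactly the same.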

The five course projects really inspired me. I loved how flexible they are, and how interesting the demoed results were. These are the project topics:
  • Chuck Close, up close (each student makes one pixel, and the group puts them all together)
  • self portrait of social interaction (every mouse click shows visually how the student feels)
  • self portrait (get a photo of themselves, do image manipulation, and implement hot spots that have different responses)
  • tell a story (computational animation)
  • interactive montage with a 'journey home' theme (done in teams)
Zoe and Julie emphasized just how fun the course is to teach, but also shared its success in terms of increasing female participation. In four years, they went from 9% to 21% women!

I've already been leaning toward Processing as a better choice for a first language as compared to Python. The experience shared in this talk, along with my own comparison of teaching both languages this semester, is solidifying my view. Python is a great early language, but I still prefer Processing first, especially for its potential to engage non-traditional students.

October 07, 2013 12:25 PM

    October 03, 2013

    Gail Carmichael

    As the opening keynote here at GHC reminded us, computer science has a supply problem.  The number of people we need to create technology is increasing at a much faster rate than the number of students taking computer science in schools.  The Exploring Computer Science and Computer Science Principles projects are aiming to help fix that.

    At a panel discussing the two projects, we learned why they matter and how they work.  CS Principles is an advanced placement (AP) course for high schools that is currently in pilot mode.  (AP classes, for the non-Americans like myself, are college level classes taught to high school students in exchange for college credit later on.)  Exploring CS, on the other hand, is intended as a regular high school level class, not an AP course.

    Both take an approach to teaching computer science that is dear to my heart.  They want to show why computer science is interesting and relevant; students should "learn how computer science is used as a lever to move the world."  They do it not through typical lecture-based styles of teaching, but through inquiry, offering interesting problems that engage students.  Exploring Computer Science is described as student centred, collaborative, and inquiry based — a very powerful combination!

    The goal is not to teach coding, but computational thinking.  For example, CS Principles centres around several big ideas including creativity, global impact, abstraction, the Internet, and more.  It does make use of fixed-response questions for assessment, but it also has performance tasks that give much more flexibility to students.  This really gives some insight into the kind of "content" delivered.

    It's this kind of philosophy that I was inspired by when creating my version of our "Introduction to Computers for Arts and Social Science Students" course.  Of course, with 440 students in a huge lecture hall, the kinds of in-class activities and assessments are somewhat limited.  Even still, I could take this course's design so much further than I have so far, and hope I get the chance to in the future.

    I'd also like to push my outreach teaching and curriculum to the next level.  As I do, I should take heed of the advice given by the panel in response to an audience question: If you are a non-profit (like Girls Who Code, for example), and you are considering using these curricula, start by talking with teachers.  They know how to engage a group of high school students and teach them effectively.

    October 03, 2013 01:09 PM

    Today was our first full day in Minneapolis for this year's edition of the Grace Hopper Celebration of Women in Computing.  It's so nice to live the conference through the eyes of the students I organized to get here, seeing as it's their first time at GHC.  It's also nice to meet up again with all my women-in-computing friends that I rarely get to see outside of the conference.

    Things got started with a nice welcome session that happened to include me going on stage with my co-chair for the Communities Committee, Charna, to get recognized for our efforts.  The opening speakers (including student members of the ABI boards!) also gave some great advice for newcomers.

    Then came a highlight of our day: the keynote / plenary session that featured Telle Whitney (CEO of the Anita Borg Institute), Sheryl Sandberg (COO of Facebook), and Maria Klawe (President of Harvey Mudd College).  There was a lot of really good, frank talk about gender issues and facing them head on.  Karla, a GHC Communities Volunteer, blogged about this session - check it out!

    There were a bunch of sessions in the afternoon, but I couldn't give them my full attention: they were either full, or I was working on my class's assignment (to be released on Friday) instead.  I did have fun presenting my Gram's House poster at the poster fair, making some really great connections with potential collaborators.

    I'm really pumped to see everyone again tomorrow (for longer!), the sessions, my own talk, and most of all, the dancing!

    October 03, 2013 12:25 AM