Max KohlerMax Kohler – Visual Communication2022-10-17T00:00:00Zhttps://maxkohler.com/Max Kohlerhello@maxkohler.comAdopting a Digital Typeface for Letterpress Printing2016-06-27T14:01:32Zhttps://maxkohler.com/posts/2016-06-27-letterpress-font-on-student-budget/<p>For my most recent uni project I designed a display typeface and adapted it for traditional letterpress. Since I only had four weeks to do it, I developed a process that is cheap, easy to execute and uses only basic tools and equipment.</p>
<h2 id="material">Material</h2>
<ul>
<li>3mm Acrylic (About £16 for an A2 sheet)</li>
<li>18mm MDF board (about £7)</li>
<li>3mm Greyboard for padding (about £2)</li>
<li>PVA (about £5)</li>
<li>Pencil, Masking tape, metal ruler</li>
</ul>
<h2 id="tools">Tools</h2>
<ul>
<li>Laser Cutter</li>
<li>Bandsaw (Alternatively an electric Jigsaw)</li>
<li>Illustrator CC</li>
</ul>
<h2 id="drawing-the-typeface">Drawing the Typeface</h2>
<p>I assume at this point you have either drawn or picked out the typeface you’ll adapt for letterpress. Generally simpler, solid shapes will be easier to work with.</p>
<p><img src="https://maxkohler.com/assets/laser-type-sample.svg" alt="Type specimen" />
<em>For my project I drew this 19th century-style display type.</em></p>
<p>If you're using a typeface with very thin strokes or sharp serifs it might be worth doing a test on the laser cutter. You might have to increase the scale or pick a heavier weight to get a decent quality cut. Once you're sure the typeface will work, you can start preparing the alphabet to be cut.</p>
<h2 id="getting-ready-for-the-laser-cutter">Getting ready for the Laser cutter</h2>
<p><img src="https://maxkohler.com/assets/laser.png" alt="Letterforms ready to be cut" />
<em>This is one of two vector files I sent to the laser cutter.</em></p>
<p>Laser cutters usually work with standard Illustrator files (but make sure to check with your technician). Make sure you convert the letters to outlines before you send them off.</p>
<p>Since acrylic sheet isn’t exactly cheap, you want to try and fit as many letterforms as possible on one sheet. There is <a href="http://svgnest.com/">software to do this</a>, but it can take a lot of processing power to get to a decent result.</p>
<p>I ended up just setting an Illustrator artboard to the size of my acrylic sheet and arranging the letters by hand. Once that’s done, remember to flip the letterforms - they will come out the right way once you print them.</p>
<p><img src="https://maxkohler.com/assets/laser-shadow.png" alt="letters with bridges" />
<em>Highlighted sections were added so the letters stay together in one piece.</em></p>
<p>If your typeface has disconnected parts - like this <a href="https://goo.gl/photos/re4WpUwtbGJu9Euo7">shadow variant</a> I made - you might want to consider adding some “bridges” so they won’t fall apart when they come out of the laser cutter. Once the type is mounted you can easily file some material off the bridges so they won't show up in print.</p>
<p><img src="https://maxkohler.com/assets/laser-letters.JPG" alt="finished letters" />
<em>Letters fresh out the laser cutter, ready to be mounted.</em></p>
<p>Get your file to the laser cutter, start the machine and wait! In my case it took about an hour to complete 30 letterforms. If your acrylic sheet has protective film on it, it's best to leave it until you're ready to print.</p>
<h2 id="mounting">Mounting</h2>
<p>For our letters to be usable in letterpress (together with regular letterpress type) we’ll have to bring the face up to 23.3mm - this is called <a href="https://en.wikipedia.org/wiki/Movable_type#Type-founding">type high</a>. (I should say that if you're outside the United Kingdom, the US or Canada you might need a different height - check with your letterpress technician.)</p>
<p>To get to that height, we’ll stack a few different materials: Our 3mm acrylic letters, 18mm MDF, 3mm Greyboard and some scrap paper will bring us close enough to 23.3mm. We can always adjust the press to compensate as well.</p>
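<p>As a quick sanity check on those numbers (a back-of-envelope sketch using the nominal thicknesses from the materials list - real sheets often measure slightly under, and the glue line adds a hair):</p>

```python
# Nominal stack height, in mm, from the materials list above.
acrylic = 3
mdf = 18
greyboard = 3

stack = acrylic + mdf + greyboard
print(stack)                     # 24
print(round(stack - 23.3, 1))    # 0.7 - a shade over type high
```

<p>So the nominal stack lands about 0.7mm over type high - the kind of difference the press adjustment mentioned above absorbs.</p>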
<p><img src="https://maxkohler.com/assets/laser-table.jpg" alt="letters ready to be mounted" /></p>
<p>First, lay out all the acrylic letters on the MDF board. Draw a rectangle around each letter, but leave about 2mm of space on each side. This is to compensate for the material we’ll lose on the bandsaw.</p>
<p>Once you have all the outlines, make sure you indicate which rectangle belongs to which letter. Then take the letters off and cut out the MDF blocks on the bandsaw.</p>
<p>Next, glue the acrylic letters on the right MDF blocks - I found that regular PVA works fine, but contact cement would probably be a bit more permanent. Let the glue set overnight before you peel the protective film off the letters.</p>
<p><img src="https://maxkohler.com/assets/laser-assembly.gif" alt="mounting the letters" /></p>
<p>You could repeat this process and stick a piece of greyboard underneath each letter to get close to type high. Since I had limited time, I just laid one big sheet of greyboard on the bed of the printing press.</p>
<h2 id="printing">Printing</h2>
<p>From this point, it’s just normal letterpress! Note the crisp edges and minimal texture the acrylic creates - almost like type metal. MDF or plywood (which can also be lasercut) give you a slightly more textured look.</p>
<p><img src="https://maxkohler.com/assets/laser-letterpress.jpg" alt="letterpress" />
<img src="https://maxkohler.com/assets/laser-print.jpg" alt="letterpress" />
<em>One of the first type samples I printed. <a href="https://goo.gl/photos/xoFBqmSPWrnuHCfm9">Here's some more</a></em></p>
<p>Like I said at the start, this process is by no means perfect. But I hope it can be a starting point for your project - if you decide to try this, let me know! I'd love to hear what you learn along the way.</p>
Chelsea Contemporary Typography2016-07-02T14:01:32Zhttps://maxkohler.com/posts/2016-07-01-summer-chelsea-typography/<h2 id="week-one">Week One</h2>
<p><img src="https://maxkohler.com/assets/chelsea-skeleton.jpg" alt="Skeletons" /></p>
<ul>
<li>Established some general type history, terminology and classification. Turns out everyone was pretty much up to speed already, but it certainly doesn't hurt to be reminded.</li>
<li>Talked about type coming from handwriting</li>
<li>Did some drawings to see how different typefaces are very much defined by their underlying skeletons and how negative space is used in various letterforms.</li>
<li>Set up a shared tumblr to collect research for the duration of the course</li>
<li>Talked about different ways of describing characteristics of a typeface. Turns out you can describe these things on scales</li>
<li>Took Open Sans and modified it to move it to the opposite end of a given scale. I moved it away from "grown-up" towards the "childish" end of the scale by cutting up the letterforms and reassembling them into abstract structures - like I used to take lego kits apart and reassemble them into something they were never intended to be (mostly guns)</li>
</ul>
<h3 id="particitype-workshop-with-ollie-and-sam">Particitype Workshop with Ollie and Sam</h3>
<ul>
<li>They're Camberwell folk who graduated 3 years ago and now run a studio</li>
<li>Talked about a typeface evolving and changing during the course of its lifetime (they had this idea of having a typeface that changes every time someone looks at the website)</li>
<li>They did this event called particitype in their third year at Camberwell where they streamed themselves and volunteers creating a full typeface over the course of a day. Viewers on the stream gave them instructions as to what tools, colours and techniques to use for each letter. I think if they were to do this again on Twitch it would probably take off (seeing as "Twitch Plays Pokémon" and "Twitch Plays Dark Souls 2" were wildly popular, "Twitch does type design" could be equally successful)</li>
<li>Talked about how sometimes design can mean bringing other people into the process, abandoning control can be a good thing.</li>
</ul>
<h3 id="workshop-results">Workshop results</h3>
<ul>
<li>Created a total of three separate character sets, each time narrowing down the set of instructions.</li>
<li>First typeface: Pull a card with a set of random instructions out of a hat, use whatever tools you like or the instructions dictate</li>
<li>Second: Still follow random instructions, but use coloured tape for straight lines and either a blue paint roller or a red paintbrush for curves. Also each letter has to have serifs. Also all uppercase.</li>
<li>Third typeface: At this point we sort of quietly abandoned the random instructions. Use a wiggly blue roller line for straights, bits of tape for curves and you must incorporate at least one stencilled red serif into each letter. This time all lowercase.</li>
</ul>
<h2 id="week-two">Week Two</h2>
<h2 id="week-three">Week Three</h2>
GraphicsMagick Recipes2016-09-14T10:01:32Zhttps://maxkohler.com/posts/2016-07-23-graphicsmagick-image-processing/<p>Having to scale a bunch of images to three different sizes while also changing the file type is something that's been coming up in my work lately. This is easy enough to do in Lightroom, but it feels icky to fire up a massive piece of software just for one simple task. GraphicsMagick does things like rotate, scale and convert images just as well, but with a much smaller footprint.</p>
<p>I'll keep using Lightroom for more advanced photo editing, but for the simple stuff GraphicsMagick is great.</p>
<h2 id="(this-is-a-command-line-thing)">(This is a command-line thing)</h2>
<p>If you're already comfortable with the command-line skip right ahead. If not, stick around - it's really not that hard to use. Jim Hoskins has a <a href="http://blog.teamtreehouse.com/introduction-to-the-mac-os-x-command-line">very good introduction to the Mac OS X Command Line</a> on Treehouse. Once you read that you'll be ready to follow along with the rest of the article.</p>
<p>(If you're on Windows like me, read the Treehouse article anyway. The Windows command line is fundamentally the same thing as the Mac OS X command line - you'll figure it out pretty quickly.)</p>
<h2 id="let's-install-graphicsmagick">Let's install GraphicsMagick</h2>
<p><strong>On a Mac</strong> the easiest way to install GraphicsMagick is through <a href="http://brew.sh/">Homebrew</a>. Homebrew is a command-line app that makes it easier to install other command-line apps. Once you've got it set up, run the following command to install GraphicsMagick:</p>
<pre class="language-batch"><code class="language-batch"><span class="token command"><span class="token keyword">brew</span> install graphicsmagick</span></code></pre>
<p><strong>On Windows</strong> grab the <a href="http://www.graphicsmagick.org/index.html">latest version from the project website</a> and follow the instructions.</p>
<p>Once the setup is complete, open a new command line and type <code>gm</code> (short for GraphicsMagick). If everything is set up correctly you should see the following result:</p>
<pre class="language-bash"><code class="language-bash"><span class="token operator">></span> gm
GraphicsMagick <span class="token number">1.3</span>.24 <span class="token number">2016</span>-05-30 Q8 http://www.GraphicsMagick.org/
Copyright <span class="token punctuation">(</span>C<span class="token punctuation">)</span> <span class="token number">2002</span>-2016 GraphicsMagick Group.</code></pre>
<p>And you're good to go! Here are some examples of things you can do:</p>
<h2 id="resize-a-folder-of-images">Resize a folder of images</h2>
<pre class="language-batch"><code class="language-batch"><span class="token command"><span class="token keyword">gm</span> mogrify -output-directory your-output-folder -create-directories -resize 400x200 *.jpg</span></code></pre>
<p>Let's look at this one bit at a time.</p>
<ul>
<li><code>gm</code> is short for GraphicsMagick. <code>mogrify</code> is the command we're using - it handles scaling, resizing and other basic transformations.</li>
</ul>
<p>Next, we'll pass a number of arguments to <code>mogrify</code> that tell it exactly what to do.</p>
<ul>
<li><code>-output-directory your-output-folder</code> specifies a folder where gm will save the resized images. If we didn't do this, gm would overwrite the source files. <code>-create-directories</code> tells gm to create the output directory if it doesn't exist yet.</li>
<li><code>-resize 400x200</code> is what triggers the actual resizing. GM will resize each image so that it fits within those dimensions - so the resized images will be <em>at most</em> 400px wide and <em>at most</em> 200px tall. If you want to resize an image to exact dimensions (and possibly stretch it in the process) use <code>-resize 400x200!</code> (with an exclamation mark).</li>
<li><code>*.jpg</code> defines the source images we're working with - in this case any JPG image in the current folder.</li>
</ul>
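<p>If you find yourself running this for several sizes (like the scale-to-three-sizes job from the intro), it can be worth generating the invocations from a small script. A sketch - the sizes and folder names here are invented for illustration, and note that <code>*.jpg</code> is only expanded by a shell, so a script calling gm directly would need to expand the glob itself:</p>

```python
# Build gm mogrify invocations for several target sizes.
# This only constructs the argument lists; to actually run them,
# pass each one to subprocess.run() with the *.jpg glob expanded
# (e.g. via the glob module).

def resize_command(size, out_dir):
    """Return one gm mogrify invocation as an argument list."""
    return [
        "gm", "mogrify",
        "-output-directory", out_dir,
        "-create-directories",
        "-resize", size,
        "*.jpg",
    ]

for size in ["1600x1200", "800x600", "400x300"]:
    print(" ".join(resize_command(size, "out-" + size)))
```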
<p>This <code>gm [command] [arguments] [source]</code> structure remains largely the same regardless of which command you're using. Here are some more examples:</p>
<h2 id="convert-a-folder-of-images-to-a-different-format">Convert a folder of images to a different format</h2>
<pre class="language-batch"><code class="language-batch"><span class="token command"><span class="token keyword">gm</span> mogrify -output-directory output -format png *.jpg</span></code></pre>
<p>GM will convert pretty much any image file into anything you could think of - the <a href="http://www.graphicsmagick.org/GraphicsMagick.html#desc">list of supported file types is impressive</a>.</p>
<h2 id="create-an-animated-gif-from-a-folder-of-images">Create an animated gif from a folder of images</h2>
<pre class="language-batch"><code class="language-batch"><span class="token command"><span class="token keyword">gm</span> convert -delay <span class="token number">100</span> *.jpg animation.gif</span></code></pre>
<p><code>-delay</code> defines the delay between each frame of the animation in hundredths of a second (not milliseconds) - so <code>-delay 100</code> shows one frame per second.</p>
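<p>Since <code>-delay</code> counts in ticks of 1/100th of a second, converting from a target frame rate is a one-liner. A small (hypothetical) helper:</p>

```python
# gm's -delay unit is 1/100th of a second per tick,
# so a target frame rate converts like this:

def delay_for_fps(fps):
    """Return the gm -delay value for a given frame rate."""
    return round(100 / fps)

print(delay_for_fps(10))   # 10 -> one frame every 0.1s
print(delay_for_fps(25))   # 4
```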
<h2 id="generate-a-grid-from-a-folder-of-images">Generate a grid from a folder of images</h2>
<pre class="language-batch"><code class="language-batch"><span class="token command"><span class="token keyword">gm</span> montage -tile 5x5 -geometry 250x250+<span class="token number">5</span>+<span class="token number">5</span> *.jpg grid.jpg</span></code></pre>
<p><code>-tile</code> specifies how many columns and rows the montage should have.
<code>-geometry</code> defines the dimensions of each individual image in the montage and the spacing around it - in this case 250px by 250px with 5px spacing on either side.</p>
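<p>To estimate how big the resulting grid will be: each tile takes up its geometry plus the spacing on both sides, multiplied by the tile count. For the 5x5 example above (roughly - montage may add a little extra for borders or labels):</p>

```python
# Rough output size of the montage above.
tiles = 5      # from -tile 5x5
cell = 250     # px, from -geometry 250x250
spacing = 5    # px on either side, from +5+5

print(tiles * (cell + 2 * spacing))   # 1300 -> roughly 1300x1300px
```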
<h2 id="rgb-to-cmyk-separations">RGB to CMYK Separations</h2>
<p>I've written a batch script based on <a href="https://stackoverflow.com/questions/32662618/need-to-generate-separate-cmyk-images-in-color-from-pdf">this Stack Overflow answer</a>:</p>
<pre class="language-batch"><code class="language-batch"><span class="token comment">REM Convert to CMYK</span>
<span class="token command"><span class="token keyword">gm</span> convert <span class="token variable">%1</span>.jpg -colorspace CMYK <span class="token variable">%1</span>-cmyk.jpg</span>
<span class="token comment">REM Invert</span>
<span class="token command"><span class="token keyword">gm</span> convert <span class="token variable">%1</span>-cmyk.jpg -operator All negate <span class="token number">1</span> <span class="token variable">%1</span>-cmyk.jpg</span>
<span class="token comment">REM Generate individual channels</span>
<span class="token command"><span class="token keyword">gm</span> convert <span class="token variable">%1</span>-cmyk.jpg -channel Cyan <span class="token variable">%1</span>-cyan.png</span>
<span class="token command"><span class="token keyword">gm</span> convert <span class="token variable">%1</span>-cmyk.jpg -channel Yellow <span class="token variable">%1</span>-yellow.png</span>
<span class="token command"><span class="token keyword">gm</span> convert <span class="token variable">%1</span>-cmyk.jpg -channel Magenta <span class="token variable">%1</span>-magenta.png</span>
<span class="token command"><span class="token keyword">gm</span> convert <span class="token variable">%1</span>-cmyk.jpg -channel Black <span class="token variable">%1</span>-key.png</span></code></pre>
<p>Usage:</p>
<pre class="language-batch"><code class="language-batch"><span class="token command"><span class="token keyword">rgbToCMYK</span>.bat myImage</span></code></pre>
<p>Where <code>myImage</code> is the filename <em>without the file extension</em>.</p>
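<p>The script above is Windows-only. One portable option is to generate the same command sequence from a script and hand each line to whatever shell you're on. A dry-run sketch (it only builds the strings - swap the <code>print</code> for a <code>subprocess.run</code> call, or pipe the output to a shell, to actually execute them):</p>

```python
# Generate the same gm command sequence as the batch script above
# for a given basename (without the file extension).
# Nothing is executed here - the commands are just printed.

def cmyk_separation_commands(name):
    cmds = [
        # Convert to CMYK, then invert all channels.
        "gm convert {0}.jpg -colorspace CMYK {0}-cmyk.jpg".format(name),
        "gm convert {0}-cmyk.jpg -operator All negate 1 {0}-cmyk.jpg".format(name),
    ]
    # Pull out the individual channels.
    for channel, suffix in [("Cyan", "cyan"), ("Yellow", "yellow"),
                            ("Magenta", "magenta"), ("Black", "key")]:
        cmds.append("gm convert {0}-cmyk.jpg -channel {1} {0}-{2}.png"
                    .format(name, channel, suffix))
    return cmds

for line in cmyk_separation_commands("myImage"):
    print(line)
```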
<h2 id="write-the-filename-into-the-image">Write the filename into the image</h2>
<pre class="language-batch"><code class="language-batch"><span class="token command"><span class="token keyword">gm</span> mogrify -output-directory output -fill white -pointsize <span class="token number">25</span> -font Arial -draw <span class="token string">"text 10,30 '%t'"</span> *.png</span></code></pre>
<p>The <code>%t</code> in the draw string is a gm filename escape - it gets replaced with each image's filename, minus directory and extension.</p>
<h2 id="the-coolest-thing%3A-you-can-combine-any-of-these-commands">The coolest thing: You can combine any of these commands</h2>
<p>This is the great thing about command-line tools like this: They don't make any assumptions about what you are going to use them for. So you can combine any of these commands (<a href="http://www.graphicsmagick.org/GraphicsMagick.html">and many more</a>) in any order you like with just a few keystrokes.</p>
<p>As an example, you might want to create an animated gif from a folder of images but also scale the gif so you don't end up with a massive file. Just pass a <code>-resize</code> argument to <code>convert</code> and you're set.</p>
<pre class="language-batch"><code class="language-batch"><span class="token command"><span class="token keyword">gm</span> convert -resize 200x200 -delay <span class="token number">100</span> *.jpg animation.gif</span></code></pre>
<p>These are some of the ways I use GM in my work - let me know if you have any more suggestions!</p>
Let’s stop wasting our time on design competitions2017-01-30T10:00:00Zhttps://maxkohler.com/posts/2017-01-30-design-competitions/<p>Let’s look at some of the briefs that are part of my assignment. Here’s one from this year’s <a href="https://www.dandad.org/en/d-ad-new-blood-awards/">D&AD Young Blood awards</a>:</p>
<p><img src="https://maxkohler.com/assets/brief-mubi.png" alt="ad" /></p>
<p>“Whatever you do, drive sign up online” — Huh. Here’s two from <a href="http://www.ycn.org/awards/ycn-student-awards/2016-17-ycn-student-awards">this year’s YCN-Awards</a>:</p>
<div class="gallery">
<img src="https://maxkohler.com/assets/brief-metoffice.png" />
<img src="https://maxkohler.com/assets/brief-fedrigoni.png" />
</div>
<p>Notice anything about these? It sounds like they’re each asking for a very specific piece of design work that will directly benefit their business. A campaign to “drive signups online” for a streaming startup. A “Christmas gift that will be handed out […] in goody bags to promote our ‘Constellation’ range of papers” for a boutique paper company. Some “content for our social media channels” for the Met office. Sounds to me like the kind of specific, target-oriented work you would normally <em>hire a designer for</em>.</p>
<h2 id="design-competitions-are-a-polite-way-of-asking-for-free-work">Design competitions are a polite way of asking for free work</h2>
<p>But hiring designers is expensive, so why not get a bunch of students to do it for free? What would normally be an expensive design commission becomes a “creative challenge” — that sounds fun, right? And who would ask for money when they’re having fun? And if you’re lucky enough to win, you will have the honour of seeing your design used in the real world — just think of all the exposure you will get from that.</p>
<p>Except of course, the company doesn’t give a shit about your career. They’re here to minimise cost, and a design competition is a great way to do exactly that: They get not just one, but potentially hundreds of young designers to develop solutions to their business problem for free. If one of them happens to be good, they get to profit from it forever while you get nothing.</p>
<p>And it’s not like these companies can’t afford to pay people: Mubi, the startup looking for a campaign to drive online signup, is worth <a href="http://uk.businessinsider.com/mubi-indie-movie-streaming-startup-worth-125-million-as-it-moves-into-china">a hundred million pounds</a>. Fedrigoni, the paper manufacturer asking for a Christmas gift took <a href="http://www.fedrigoni.com/wp-content/uploads/2016/04/Fedrigoni-2015-CONSOLIDATED-FINANCIAL-RESULTS-.pdf">eight-hundred million pounds in revenue</a> in 2016. The Met office reported a revenue of <a href="http://www.metoffice.gov.uk/media/pdf/d/b/AR1415_Revised.pdf">two-hundred and twenty million pounds</a> in 2014. They don’t mention that in the brief, of course. Why? Because they’re here to screw you. They don’t care about your career, the only reason they participate is to get some free labour and move on.</p>
<h2 id="they-don%E2%80%99t-prove-you%E2%80%99re-a-good-designer">They don’t prove you’re a good designer</h2>
<p>If you’ve ever done freelance work, you know what it’s like: You get the client to sign your contract. You ask smart questions to pinpoint the problem they’re trying to solve. You speak to the people who will be using whatever you’re selling. You gather the data. New problems come up. You prototype, you test, you ask more questions until you arrive at a solution. You make the case for your solution to the client. <a href="https://www.youtube.com/watch?v=6h3RJhoqgK8">You get paid</a>.</p>
<p><em>That’s design.</em></p>
<p>Now, let’s see how a design competition works: You get handed a brief that has a lot of flowery language and few actual facts. You try to guess what the jury will want to see. You make something that looks pretty. If you’re lucky you might get a voucher.</p>
<p>That, friend, has nothing to do with design.</p>
<p>Winning a design award proves that you made something that happened to appeal to a panel of <a href="https://www.dandad.org/profiles/jury/253209/dandad-jury-2016/">middle-aged white people</a>. It doesn’t prove you’re a good designer because, guess what, you didn’t design anything. You didn’t defend your idea to the client, you didn’t negotiate a budget, you didn’t iterate. You made something that looks pretty, for free.</p>
<h2 id="they%E2%80%99re-hurting-all-of-us-long-term">They’re hurting all of us long-term</h2>
<p>We’ve established that design competitions are a glorified way of asking for free work. That should be reason enough not to participate, but the long-term damage these competitions do to the industry goes further. By <a href="https://www.dezeen.com/2013/07/17/graduates-should-work-for-nothing-says-d-and-ad-chairman/">legitimising free work</a> as the way to establish yourself in the design industry, we’re shutting out talented people who simply can’t afford to work for free. What if you’re a young woman with a kid to take care of? What if you’re working class, your parents can’t support you and you’ve got rent to pay? What if you’re an immigrant who <a href="http://www.migrationobservatory.ox.ac.uk/resources/reports/the-minimum-income-requirement-for-non-eea-family-members-in-the-uk-2/">needs to have an income to be allowed to stay in the country</a>? These people are smart, and when they realise the design industry expects them to work for nothing they’re not going to stick around.</p>
<p>If we don’t do anything about this, we will end up with an industry that only represents the most privileged parts of society, and not the people who are going to be using our work. We need to keep the design industry open to anyone regardless of their social background, gender or nationality if we’re going to stay competitive.</p>
<p>This is why it is important to insist on getting paid for your work even if you could afford not to: It hurts not just you but all of us (including the people asking for the free work) by making the industry less diverse, and thus less effective in the long run.</p>
<h2 id="%E2%80%A6-and-frankly%2C-we-don%E2%80%99t-need-them">… and frankly, we don’t need them</h2>
<p>50 years ago, doing unpaid work for a design competition might have been your only real shot at getting your name recognised — but we have the internet now. Chances are you have more people following you on Instagram than will ever come to an award ceremony. And the people who follow you online are doing so not because your name came up on a shortlist, but because they genuinely care about your work. They’re the ones that will come to your exhibition, they’re the ones who will buy your book, they’re the ones who will support you and your work long-term. Social media lets you engage with your audience on your terms, independently of anyone else’s platform.</p>
<p>The people running award ceremonies know all this, of course, and it terrifies them: The continuous stream of unpaid workers their ad agencies have relied on for 50 years is about to dry up. That’s why they’re trying everything to convince us (and our teachers) that their awards are still relevant — they’re not.</p>
<h2 id="what-can-we-do%3F">What can we do?</h2>
<p>Let’s stop participating. Spend the £20 it costs to enter the D&AD awards on some new materials instead and get to work. Share your work online and build a sustainable audience there. If a competition brief is part of your course, do the work but don’t submit it, and let your teacher know why. You and I have skills that are in demand everywhere, so let’s stop wasting our time on those trying to exploit us 🖕</p>
<p class="note">
This article was first published on <a href="https://medium.com/@maxakohler/lets-stop-wasting-our-time-on-design-competitions-fbaa2582dd79#.4gouvd2w1">Medium</a>
</p>Bridging2017-06-25T10:00:00Zhttps://maxkohler.com/posts/2017-06-25-bridging-notes/<h2 id="april-18-bridging-talk">April 18 Bridging Talk</h2>
<p>Content on the sheet is relevant but dates are for illustration only.
Will be looking at Unit 7 and 8 Work, no target of people to get on ba
Get a merit on average (c-), get on the bridging unit
prepare a project to do through 3rd year, also dissertation
bridging unit outcomes: evaluation, proposal for 3rd year</p>
<p>sheena calvert running unit 9
s.calvert@csm.arts.ac.uk
look at some old dissertations
6000-8000 words
open subject
1/3 of the degree mark (unit 9)
dissertation is 75% of unit 9, professional portfolio is 25%
5th december dissertation handin
primary/secondary</p>
<pre><code> / \
/ \
-> ->
\ /
\ /
converge -> diverge
</code></pre>
<h2 id="friday%2C-june-22">Friday, June 22</h2>
<h3 id="show-ideas">Show ideas</h3>
<ul>
<li>Hardware based</li>
<li>Ollie's twitter printer (Like it bc it uses ancient hardware)</li>
<li>A drawing robot</li>
<li>An interactive machine</li>
<li>Going to buy a raspberry pi</li>
<li>Darkroom stuff</li>
</ul>
<h3 id="possible-unit-9-topics-(dissertation)">Possible Unit 9 Topics (Dissertation)</h3>
<ul>
<li>Smth design history related</li>
<li>Lost bauhaus buildings (The young old) I'm a bit disappointed the dissertation only counts for 25%</li>
</ul>
<h3 id="things-you-think-you-are-good-at">Things you think you are good at</h3>
<ul>
<li>Programming</li>
<li>tech things</li>
</ul>
<h3 id="things-you-don't-want-to-do">Things you don't want to do</h3>
<ul>
<li>Books (esp process books)</li>
<li>Drawing / Illustration</li>
<li>Probably animation</li>
</ul>
<h3 id="what-scale-%2F-format-would-you-like-to-try-out">What scale / format would you like to try out</h3>
<ul>
<li>Mostly digital stuff, do things that work more in an exhibition context</li>
<li>I enjoyed doing the big format weather thing (taking stuff from the web into a large scale installation)</li>
</ul>
<h3 id="assignments">Assignments</h3>
<p>Make a mindmap about the dissertation</p>
<ul>
<li>Design History
<ul>
<li>Innocuous bauhaus buildings</li>
<li>Street Signage</li>
<li>Photo lettering</li>
<li>Technical drawing</li>
<li>Language as a design tool</li>
</ul>
</li>
<li>Automation
<ul>
<li>Machine Learning</li>
<li>Humans pretending to be machines / machines pretending to be human</li>
<li>New ways of interacting with machines (Chat, speech, AR)</li>
<li>Tech creating the service economy / Gig economy</li>
</ul>
</li>
<li>Design Ethics
<ul>
<li>Tech</li>
<li>Uber (Greyball)</li>
<li>Google being fined</li>
<li>Designing Trump's border wall</li>
</ul>
</li>
<li>Spam robots</li>
<li>Design education</li>
<li>Design systems</li>
</ul>
<h2 id="wednesday%2C-june-28">Wednesday, June 28</h2>
<ul class="contains-task-list">
<li class="task-list-item"><input class="task-list-item-checkbox" checked="" disabled="" type="checkbox" /> 1 Non-fiction text
<ul>
<li><a href="https://www.typotheque.com/articles/from_lettering_guides_to_cnc_plotters">From Lettering Guides to CNC Plotters</a></li>
</ul>
</li>
<li class="task-list-item"><input class="task-list-item-checkbox" checked="" disabled="" type="checkbox" /> 1 Fiction text
<ul>
<li>The Hunter (Joe Sparrow)</li>
</ul>
</li>
<li class="task-list-item"><input class="task-list-item-checkbox" checked="" disabled="" type="checkbox" /> 10 Images
<ul>
<li>9 Printouts + Google maps thing from the RCA</li>
</ul>
</li>
<li class="task-list-item"><input class="task-list-item-checkbox" checked="" disabled="" type="checkbox" /> 1 Film
<ul>
<li>No Man's Sky</li>
<li><a href="https://cdn.arstechnica.net/wp-content/uploads/2013/04/thatdude-df-aboveground-640x385.png">Dwarf Fortress</a></li>
</ul>
</li>
<li class="task-list-item"><input class="task-list-item-checkbox" disabled="" type="checkbox" /> 1 Poem</li>
<li class="task-list-item"><input class="task-list-item-checkbox" checked="" disabled="" type="checkbox" /> 1 Object
<ul>
<li>The medium format camera (Came out 1937)</li>
</ul>
</li>
<li class="task-list-item"><input class="task-list-item-checkbox" checked="" disabled="" type="checkbox" /> 1 Interview / Article
<ul>
<li>Shit Town</li>
</ul>
</li>
<li class="task-list-item"><input class="task-list-item-checkbox" checked="" disabled="" type="checkbox" /> 2 Exhibitions
<ul>
<li>RCA Grad Show (esp machine learning work)</li>
<li><a href="https://wellcomecollection.org/articles/images-and-objects-electricity/">Electricity @ Wellcome Collection</a></li>
</ul>
</li>
<li class="task-list-item"><input class="task-list-item-checkbox" checked="" disabled="" type="checkbox" /> 3 Books
<ul>
<li>Findings on Light</li>
<li>True Print</li>
<li>Young Old: Urban Utopias of an Ageing Society</li>
</ul>
</li>
<li class="task-list-item"><input class="task-list-item-checkbox" checked="" disabled="" type="checkbox" /> 3 Relevant people
<ul>
<li><a href="https://www.experimentaljetset.nl/250/">Experimental Jetset</a></li>
<li>David Farenthold (WaPo) <a href="https://www.washingtonpost.com/politics/a-time-magazine-with-trump-on-the-cover-hangs-in-his-golf-clubs-its-fake/2017/06/27/0adf96de-5850-11e7-ba90-f5875b7d1876_story.html?utm_term=.6ed1bdf81837">Article on Trump's fake Time cover</a></li>
<li><a href="http://www.joostgrootens.nl/">Joost Grootens</a></li>
</ul>
</li>
</ul>
<h2 id="thursday%2C-june-29">Thursday, June 29</h2>
<h2 id="jake-notes">Jake Notes</h2>
<p>Trees and ballpoint drawings are about labour
a machine doing it for you
people always ask how ""
the work of art in the age of mechanical reproduction
yuri suzuki
music, use of technology in music
rube goldberg machines
needlessly elaborate
news from nowhere, william morris</p>
<p>sam winston
birth day
abstract data stuff</p>
<p>machine drawings become about content
romeo and juliet is sort of the opposite, super irrational etc</p>
<h2 id="friday%2C-june-30">Friday, June 30</h2>
<h3 id="prp-talks">PRP talks</h3>
<h4 id="lettie">Lettie</h4>
<p>Mandalas created from a system
Ancient art, ancient cultures
Carl Jung
Pollock</p>
<h4 id="ben">Ben</h4>
<p>Dissertation on board games on the silk road</p>
<h4 id="margot">Margot</h4>
<p>feminism and censorship of female bodies
in the past the church used to dictate what people were allowed to show, now it's silicon valley
dissertation on the same thing
how it used to be in art in the past vs now on the internet
Original 60s feminism vs our modern view on things</p>
<h4 id="luke">Luke</h4>
<p>Public spaces and human behaviour
hospitals always have the same lighting, can't tell whether it's day or night
fear of empty spaces
andreas gursky</p>
<h3 id="research-questions">Research Questions</h3>
<ul>
<li>Forgotten Bauhaus Buildings in Southern Germany</li>
<li>Uber and the question of ethics in interaction design</li>
<li>How does image recognition software reflect biases in tech</li>
<li>How does interaction design shape the modern service economy</li>
</ul>
<h3 id="assignments-1">Assignments</h3>
<p>Write a dissertation proposal of 400 words or more
This can be bullet points, more or less expanded
Based on the questions, do some initial research</p>
<h3 id="dissertation-proposal">Dissertation Proposal</h3>
<p>My initial interest came up when I learned there are a number of buildings in my hometown by Hermann Blomeier - a student of Mies van der Rohe's at the Bauhaus (and one of its last graduates). Most of these were built in the 60s, when the economy had started to recover. The list includes a rowing club, a ferry terminal and a water treatment plant. (There seems to be a small amount of academic research on this, but I haven't been able to source the bulk of it yet.)</p>
<p>I'm interested in the less high-profile buildings designed by Bauhaus-trained architects. It's generally accepted that few designs coming out of the Bauhaus ever made it to mass production - Breuer's steel tube chairs being one of them. However, this doesn't seem to be the case for architecture - even while the Bauhaus was still active, teachers and graduates completed a number of projects. There are of course the Barcelona Pavilion, the Haus am Horn and the Bauhaus building itself, but van der Rohe, Gropius and others also built a number of factories, housing estates and municipal buildings (Engels, Meyer 2006).</p>
<p>There doesn't seem to be a proper database of Bauhaus buildings - the closest I could find was the book by Meyer and Engels - so there's potential for some primary research over the summer.</p>
<p>An interesting aspect might be to see how these buildings were judged, used and changed over time. This varied considerably between East and West Germany: while the communist government held up the Bauhaus buildings for their economical use of materials and futuristic outlook, the West regarded them as "an episode to be got over" (Birkhauser, 1998). The buildings have also been modified over time - the rowing club in my hometown bears little resemblance to the original drawings. It would be interesting to find out whether mid-century modernism is seeing a similar revival as brutalism is in this country.</p>
<p>greenwich town hall
from bauhaus to our house
pessac worker buildings
by le corbusier
people changing the buildings as soon as they were sold, putting up flower pots etc, painting the walls</p>
<p>anna ridler</p>
<p>email sheena
proposal
article on people pretending to be robots
images of rowing club
do presentation on evaluation and proposal on both dissertation and ips
draft of the presentation ready by monday 17th
write dissertation 500 words, will be printed and people will annotate each others' work</p>
Teaching machines to draw2017-10-01T10:00:00Zhttps://maxkohler.com/posts/2017-10-01-teaching-machines-to-draw/<h2 id="november-28%2C-2017">November 28, 2017</h2>
<p>Turns out my steppers and drivers worked just fine, I just didn't have enough voltage. 9V battery does the trick. I also found out my first EasyDriver is in fact perfectly fine, which is good news.</p>
<h2 id="october-3%2C-2017">October 3, 2017</h2>
<p>Managed to solder pins on my EasyDrivers, they fit nicely into the breadboards now. Hopefully I didn't cook any of the components.</p>
<p><img src="https://maxkohler.com/assets/unit-10/driver.jpg" alt="Driver" />
BA Graphic Design level soldering</p>
<p><img src="https://maxkohler.com/assets/unit-10/arduino.jpg" alt="Driver" /></p>
<h2 id="october-15%2C-2017">October 15, 2017</h2>
<p>I found out you can get AutoCAD for free as a student, so I used that to draw up the shaft supports I need to build. What a good piece of software. (I can't believe I ever thought it was a good idea to do this in Illustrator)</p>
<p><img src="https://maxkohler.com/assets/unit-10/mounting.png" alt="Mounting" /></p>
<h2 id="october-17">October 17</h2>
<p>Bought some more components to help with the wood:</p>
<ul>
<li>1 x 16mm Auger bit (To drill a hole that will take a ball bearing)</li>
<li>1 x 8mm General purpose bit (to make supports for the slide shafts)</li>
<li>Mounting brackets</li>
<li>Machine screws</li>
</ul>
<h2 id="october-18%2C-2017">October 18, 2017</h2>
<p>Had a good talk with a technician at Camberwell today concerning cutting my MDF (shouldn't be a problem) and drilling holes for my shaft supports (might be a problem). Turns out the 40cm drill bit I bought yesterday is going to be useless - you can chuck it in a drill press, but because it's so long it will wobble and make it impossible to drill a precise hole. The workshop has some special drill router bits that should work better.</p>
<p>Cutting the 5mm steel rod isn't a problem, he says. I'm starting to think having two shafts at the end might be useful - it allows me to tighten each timing belt separately (but then again I'm increasing friction, and I could probably cut both belts to the same length with reasonable precision).</p>
<p>Workshops are closed Wednesday afternoons, and they're doing inductions Thursday - So I'll be cutting my MDF Friday afternoon.</p>
<p>Also, I held some screws up against things and found out that I do have the correct machine screws to go in my linear bearings, and my wood screws will fit in my angle brackets.</p>
<p><img src="https://maxkohler.com/assets/unit-10/hardware.jpg" alt="Parts" /></p>
<h2 id="october-20%2C-2017">October 20, 2017</h2>
<p>Got my MDF cut to size - turns out I couldn't get 15 pieces because you lose 3mm with each cut (the width of the saw blade). I managed to put together a version of the slide mechanism using MDF blocks and mounting brackets.</p>
<p><img src="https://maxkohler.com/assets/unit-10/mdf-shaft-support.jpg" alt="MDF slide mechanism" /></p>
<p>Because the rods are so long, there is quite a lot of springiness to them - hoping this won't be too much of a problem since there isn't going to be much weight on them.</p>
<h3 id="problems%3A">Problems:</h3>
<ul>
<li>It's difficult to get the MDF blocks to sit straight using mounting brackets - it always pulls them in one direction or the other</li>
<li>MDF doesn't work too well as a shaft support - the rods move around in the holes, causing the slide to get stuck.</li>
</ul>
<p>I've ordered some machine-made <a href="https://www.gearsandsprockets.co.uk/pillar-shaft-support-mount-for-linear-guide-rails-8mm-sk8uu.html">aluminium shaft supports</a> which should solve both problems at the same time.</p>
<p>Hoping to install the drive shaft (with temporary support) and do a test with the motor by the end of next week.</p>
<h2 id="october-25%2C-2017">October 25, 2017</h2>
<p>The aluminium shaft supports arrived:</p>
<p><img src="https://maxkohler.com/assets/unit-10/metal-shaft-suport.jpg" alt="Aluminium shaft supports" /></p>
<p>With them installed, everything seems much more stable. There's also the added benefit that the slide sits much lower over the table surface, which means the eventual pen won't be too far from the paper. The aluminium supports are also much lighter than the MDF ones. Still, the slide doesn't run perfectly smoothly, but I'm hoping some small adjustments and a bit of oil will solve that.</p>
<p><img src="https://maxkohler.com/assets/unit-10/driveshaft-mount.jpg" alt="Driveshaft mount" /></p>
<p>Following my disillusionment with the MDF shaft supports, I got an aluminium part to take the driveshaft. A few layers of paper bring it up to the height of the motor shaft.</p>
<p>I came up with this arrangement to mount the timing belts:</p>
<p><img src="https://maxkohler.com/assets/unit-10/belt-mount.jpg" alt="Driveshaft mount" /></p>
<p>It consists of two corner braces, an M5 screw and some nuts, all of which I had already. However I doubt this will be stable enough to support the belt once it's under tension.</p>
<p>I'm mounting the belt as close to the linear bearing as possible, so there's the least amount of leverage to get it stuck.</p>
<h2 id="october-31%2C-2017">October 31, 2017</h2>
<p>I came up with this arrangement to attach the sliding platform to the timing belt:</p>
<p><img src="https://maxkohler.com/assets/unit-10/belt-attachment.jpg" alt="Belt attachment" /></p>
<p>It's a <a href="https://www.screwfix.com/p/mending-plates-zinc-plated-76-x-16-x-10-pack/16034">mending plate</a> mounted to the slide with two M4 screws. The belt is squeezed between it and the platform - originally I was going to use two plates below the platform, but this is simpler and has the added benefit of holding the belt up (which means it needs less tension). This way I don't need to worry about trying to join the two ends of the belt together. I can also adjust the tension when I need to.</p>
<p>I repeated this on both sides, connected by the driveshaft. I then connected the stepper (happy to report the EasyDriver survived my soldering) and it works!</p>
<p><video playsinline="" muted="" loop="" controls="" autoplay="" src="https://maxkohler.com/assets/unit-10/motor.mp4"></video></p>
<p>It's moving <em>very</em> slowly at the moment, but I should be able to fix that by going from 1/8 microstepping to 1/4 or 1/2 - effectively reducing the resolution by half and doubling the speed. The EasyDriver has two ports to do this, which means I'll be able to adjust speed/resolution based on the drawing I'm trying to do.</p>
<p>Still missing a shaft support and stepper for the Y-axis.</p>
<h2 id="november-2%2C-2017">November 2, 2017</h2>
<p>I managed to double the speed of the slide using this logic table:</p>
<table>
<thead>
<tr>
<th>MS1</th>
<th>MS2</th>
<th>Microstep Resolution</th>
</tr>
</thead>
<tbody>
<tr>
<td>Low</td>
<td>Low</td>
<td>Full Step (2 Phase)</td>
</tr>
<tr>
<td>High</td>
<td>Low</td>
<td>Half Step</td>
</tr>
<tr>
<td>Low</td>
<td>High</td>
<td>Quarter Step</td>
</tr>
<tr>
<td>High</td>
<td>High</td>
<td>Eighth Step</td>
</tr>
</tbody>
</table>
<p><a href="https://learn.sparkfun.com/tutorials/easy-driver-hook-up-guide#hardware-overview">Logic table source</a></p>
<p>However, apparently you get less torque the larger the steps are? The lowest resolution that would work reliably is quarter steps. <a href="http://www.geckodrive.com/microstep-full-step-torque">This article would suggest it's way more complicated</a>.</p>
<p>I've also laid out the x-axis platform, which needs to fit a stepper, the belt attachment, two shaft supports and its own belt support. I'm running the belt in between the two bearings so it shouldn't get stuck. I've realised the pen should probably go between the bearings as well to reduce leverage that could twist the platform.</p>
<p>Ordered some wire to connect the second stepper once it arrives.</p>
<h2 id="november-9%2C-2017">November 9, 2017</h2>
<p>The stepper arrived, and it fits into the mount perfectly (thanks, <a href="http://www.nema.org/Standards/About-Standards/pages/default.aspx">National Electrical Manufacturers Association</a>). Here's the full wiring setup with an Arduino, two EasyDrivers, and wires going off to the motors:</p>
<p><img src="https://maxkohler.com/assets/unit-10/wiring.jpg" alt="Wiring" /></p>
<p>I wrote the most basic script to run the machine I could think of. Here's some pseudo code:</p>
<pre><code>while i < 1000
    set stepper one to direction 0 or 1
    set stepper two to direction 0 or 1
    do 500 steps on each stepper
    i++
</code></pre>
<p>This draws a sort of diagonal grid - the very first work to come out of the drawing machine! Mostly this is a way to have the machine moving continuously so I can tinker with things.</p>
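Fleshed out slightly, the loop might look like this in JavaScript - written here as a pure function that returns the list of instructions instead of pulsing real motors (the instruction format is hypothetical):

```javascript
// Generate the diagonal-grid instruction list: each iteration picks a
// random direction (0 or 1) for both steppers and moves 500 steps.
function diagonalGrid(iterations = 1000, stepsPerMove = 500) {
  const instructions = [];
  for (let i = 0; i < iterations; i++) {
    instructions.push({
      dir1: Math.round(Math.random()), // stepper one: direction 0 or 1
      dir2: Math.round(Math.random()), // stepper two: direction 0 or 1
      steps: stepsPerMove,             // 500 steps on each stepper
    });
  }
  return instructions;
}
```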
<p>The whole thing is quite wobbly, though some of this might be remedied when I find a better way than masking tape to attach the pen to the drawing platform. Bringing the pen as close to the slide as possible should also help, since it reduces the amount of leverage.</p>
<p>Tracey makes the point that the wobbliness might be part of the work: my individual handwriting shows through in my not being able to drill a hole in the right place. Though I'm hoping to get things at least a little more steady.</p>
<p>On the Y-axis I'm clearly running at the upper limit of torque that the stepper can put out. When there is too much resistance it gets stuck and makes an awful noise. The range of motion is limited by the belt going into a skewed angle, but it's still over a metre.</p>
<p>I have full range of motion on the X-Axis, which is about 90cm. Because the slide is much lighter and shorter (hence less tension on the belt) there aren't any torque issues.</p>
<h2 id="november-10%2C-2017">November 10, 2017</h2>
<p>Killed my laptop.</p>
<p>I made the mistake of messing with the wiring while the machine was running. I hadn't soldered pins onto the <code>MS1</code>
and <code>MS2</code> switches on <a href="https://maxkohler.com/posts/2017-10-01-teaching-machines-to-draw/#october-3-2017">October 3</a>, so I just stuck jumper wires through the holes into the breadboard. One of these came loose, and when I tried to put it back, my laptop went black.</p>
<p>My best guess is that I somehow made a short, which sent 12V from the motor circuit into the Arduino and my laptop's USB port. The laptop went dark immediately, wouldn't turn on anymore, and needed professional repair. Wrong assumptions:</p>
<ol>
<li>The Arduino is idiot-proof. It's clearly not.</li>
<li>12V isn't enough to do any harm. It clearly is.</li>
<li>Laptops have fuses in the USB ports. Maybe? My machine didn't need a board replacement, which indicates the 12V did get stopped somewhere.</li>
</ol>
<p>I've bought something called a <a href="https://www.amazon.co.uk/gp/product/B00HFUDI66/ref=oh_aui_detailpage_o00_s00?ie=UTF8&psc=1">USB Isolator</a> which is designed to prevent this exact thing from happening. It goes up to 30,000V - should do. I'm also not going to touch any wires while there's voltage on them again, and I've put pins on the <code>MS1</code> and <code>MS2</code> switches, so no more loose wires.</p>
<h2 id="november-13%2C-2017">November 13, 2017</h2>
<p>Tutorial w/ Tracey</p>
<h2 id="november-15%2C-2017">November 15, 2017</h2>
<p>Did some drawings using my freshly fixed laptop (now being extra careful and using the USB isolator). I figured out a way to reduce the vibration in the machine: waiting about 100ms between each command. This slows things down, but the results are much nicer.</p>
<p><video muted="" playsinline="" loop="" controls="" autoplay="" src="https://maxkohler.com/assets/unit-10/grid.mp4"></video></p>
<p>I took some measurements to work out how far the machine moves in a given number of steps. It's about 0.025mm per step (which seems way too precise, but I'm done messing with the microstepping resolution for the moment), or 40 steps in a millimeter. I got slightly different results for each axis (0.023mm/step on the x-axis). I'm assuming this is due to differences in the stepper motors (they come from different manufacturers) and inconsistencies in the overall construction of the machine.</p>
<p>Based on this data I expanded the driver code, so the machine is now aware of where it is at all times. This allows me to <a href="https://github.com/awesomephant/robotics/blob/6c8d4f32b5beba0490965abf3c7468a130d1f617/stepper-test.js#L93">move the pen to any point on the table</a>. By setting the two steppers to different speeds I can draw a straight line between arbitrary points. So far I've been using straight Javascript to make drawings - simple loops, random numbers etc. The next step will be to run SVG files through the machine.</p>
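The position logic boils down to converting millimetres into step counts and running the two steppers at proportional rates. A rough sketch, using the measured resolutions above (the helper names are made up, not from the actual driver code):

```javascript
// Steps per millimetre, from the measured resolutions:
// ~0.023 mm/step on X, ~0.025 mm/step (40 steps/mm) on Y.
const STEPS_PER_MM_X = 1 / 0.023;
const STEPS_PER_MM_Y = 1 / 0.025;

// To draw a straight line between two points, both steppers run for the
// same duration at different rates, so they finish at the same moment.
function lineMove(fromMM, toMM) {
  const dx = Math.round((toMM.x - fromMM.x) * STEPS_PER_MM_X);
  const dy = Math.round((toMM.y - fromMM.y) * STEPS_PER_MM_Y);
  const longest = Math.max(Math.abs(dx), Math.abs(dy));
  return {
    dirX: dx >= 0 ? 1 : 0,
    dirY: dy >= 0 ? 1 : 0,
    stepsX: Math.abs(dx),
    stepsY: Math.abs(dy),
    // Relative pulse rates: the axis with fewer steps runs slower.
    rateX: longest ? Math.abs(dx) / longest : 0,
    rateY: longest ? Math.abs(dy) / longest : 0,
  };
}
```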
<p>I've <a href="https://github.com/awesomephant/robotics/blob/master/svgToInstructions.js">adapted a script I wrote earlier this year</a> to convert SVG files to machine instructions, but it looks like it needs some more work before it's usable. For shapes made of straight lines (<code><polygon></code>, <code><line></code>, <code><rect></code> etc.) it just extracts the points. Shapes with Bezier curves in them are converted into straight line segments - if the resolution on this is high enough, it should look like a smooth curve in the drawings.</p>
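The curve-flattening step amounts to sampling the cubic Bezier polynomial at evenly spaced parameter values. A simplified sketch of the idea (not the actual script):

```javascript
// Flatten a cubic Bezier (control points p0..p3) into a polyline by
// sampling the parametric form at n+1 evenly spaced values of t.
// With n high enough, the polyline reads as a smooth curve on paper.
function flattenCubicBezier(p0, p1, p2, p3, n = 16) {
  const points = [];
  for (let i = 0; i <= n; i++) {
    const t = i / n;
    const u = 1 - t;
    points.push({
      x: u * u * u * p0.x + 3 * u * u * t * p1.x + 3 * u * t * t * p2.x + t * t * t * p3.x,
      y: u * u * u * p0.y + 3 * u * u * t * p1.y + 3 * u * t * t * p2.y + t * t * t * p3.y,
    });
  }
  return points;
}
```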
<p><a href="https://github.com/awesomephant/robotics">This is my git repo for all of this.</a></p>
<h2 id="november-24%2C-2017">November 24, 2017</h2>
<p>I've had three drawings stolen from the studio, which I guess is some form of compliment. Some ideas by people I talked to about the machine:</p>
<ul>
<li>Everyone likes the little glitches and inconsistencies resulting from vibration, the motors getting stuck and ink bleeding out into the paper. This becomes especially visible in very repetitive pieces, where every little glitch stands out.</li>
<li>The ballpoint drawings especially have a print-like quality to them - a bit like etchings.</li>
<li>Make an explicit link to Sol LeWitt, maybe feed the machine actual Sol LeWitt instructions. (This seems possible using some modern language-processing model - Microsoft Bot Framework being the one I've worked with before. It might also be interesting to get other people involved - they could just write instructions in English and get the results back, a bit like how you used to hand your punch cards to a technician who would run the code overnight.)</li>
<li>The idea that the machine reacts to its environment - me bumping into it, the belt getting stuck, people walking past it - all become visible in the drawings. Maybe have a whole group of people run past the machine and have it record the vibration (like a seismograph).</li>
<li>Find ways of feeding the machine other than Illustrator files - some degree of randomisation might lead to more interesting results</li>
<li>Do something where a human draws alongside the machine - similar to <a href="https://blog.google/topics/machine-learning/play-duet-computer-through-machine-learning/">this Google demo</a> in which a computer accompanies a human pianist through machine learning</li>
<li>Using music to feed the machine</li>
<li>Using found imagery (off Google images) to reproduce on the machine</li>
<li>Find ways of making images that are less obviously vector-based (this would probably be some kind of cross-hatching. I am interested in creating images that are a bit less defined, more focussed on tonal differences than lines.)</li>
<li>Go up in format, either A1 or A0 (I haven't measured, but I think the largest the machine can do is somewhere between A1 and A0)</li>
<li>I should oil the machine (Yes)</li>
<li>I'm interested in using 3d imagery to feed the machine - maybe topographic maps or line renderings of 3d objects. A lot of the repetition drawings I've been making are already going in this direction.</li>
</ul>
<p>In other news, the <a href="https://maxkohler.com/posts/2017-10-01-teaching-machines-to-draw/#october-31-2017">missing shaft support</a> is finally on the way. It should help make the machine more stable, maybe solve some of the issues with the y-axis getting stuck.</p>
<h2 id="november-25%2C-2017">November 25, 2017</h2>
<p>Here's some of the images I made on the drawing machine this week. Most of them were designed in Illustrator, with any randomness coming only through the machine itself (by it getting stuck, someone bumping into it, the pen running off the paper etc.)</p>
<p><img src="https://maxkohler.com/assets/unit-10/week-1/machine-drawing-1.jpg" alt="Machine drawing 1" />
<img src="https://maxkohler.com/assets/unit-10/week-1/machine-drawing-2.jpg" alt="Machine drawing 2" />
<img src="https://maxkohler.com/assets/unit-10/week-1/machine-drawing-3.jpg" alt="Machine drawing 3" />
<img src="https://maxkohler.com/assets/unit-10/week-1/machine-drawing-4.jpg" alt="Machine drawing 4" />
<img src="https://maxkohler.com/assets/unit-10/week-1/machine-drawing-5.jpg" alt="Machine drawing 5" />
<img src="https://maxkohler.com/assets/unit-10/week-1/machine-drawing-6.jpg" alt="Machine drawing 6" />
<img src="https://maxkohler.com/assets/unit-10/week-1/machine-drawing-7.jpg" alt="Machine drawing 7" />
<img src="https://maxkohler.com/assets/unit-10/week-1/machine-drawing-8.jpg" alt="Machine drawing 8" /></p>
<h2 id="december-20%2C-2017">December 20, 2017</h2>
<p>TODO instagram images</p>
<h2 id="january-19%2C-2018">January 19, 2018</h2>
<p>Emma suggests I look at <a href="http://cameronrobbins.com/wind-drawings/">Wind Drawings by Cameron Robbins</a>. As he describes it,</p>
<blockquote>
<p>The Wind Drawing Machines are installed in different locations to receive weather energy and translate it into an abstract format of ink drawings on paper. [...] The machines respond to wind speed and wind direction, and allow rain and sun to also play on the drawings. The principle employed here is that the wind direction orients a swiveling drawing board connected to a wind vane, while the wind speed drives a pen on a wire arm around in a cyclical motion.</p>
</blockquote>
<p>I like the notion that these are abstract drawings, but also in some sense a very accurate record of a specific place at a certain time. Similar maybe to <a href="http://www.samwinston.com/projects/">Sam Winston's work</a>. The first thing I thought of was doing something with <a href="https://www.metoffice.gov.uk/datapoint/product/list">Met Office Data</a>, but that seems contrived.</p>
<h2 id="january-24%2C-2018">January 24, 2018</h2>
<p>TODO Random pixel shading</p>
<h2 id="january-25%2C-2018">January 25, 2018</h2>
<p>TODO Random pixel shading layering</p>
<h2 id="january-26%2C-2018">January 26, 2018</h2>
<p>I managed to source some <a href="https://www.amazon.co.uk/gp/product/B01LY6W4MW/ref=oh_aui_detailpage_o03_s00?ie=UTF8&psc=1">CMYK ballpoints</a>. Unfortunately, they only come in a pack of 20 together with 4 other colours that are less useful. Here's the first four-colour drawing I did, using the randomized pixel method:</p>
<p>Here's one using regular pixels and this Warhol print (primarily because it has bright colours, and secondly because it continues the tradition of using communist leaders as test subjects):</p>
<p>I prefer the second one a lot. Each pixel is inked much more evenly, leading to cleaner colours. The square pixels also make it easier to align each layer, although getting it perfect seems pretty difficult. I did, however, run this drawing at quarter step to save time, so there should be room for improvement if I'm willing to wait twice as long. Having developed this method of layering colours on top of each other, I'm now effectively screenprinting (as opposed to doing line drawings).</p>
<p>Some more insights from this test:</p>
<ul>
<li>As with earlier layered drawings, it's best to reduce the density of each layer. Otherwise you end up with thick blotches of ink with no added detail. This has the added benefit of speeding up drawing time.</li>
<li>I printed CMYK in that order because that's how it's done in normal printing.</li>
<li>As Jake points out, the colour mixing in CMYK comes from a combination of overprinting and optical mixing. In the above test, the colours seem to be pretty accurate.</li>
</ul>
<h2 id="january-28%2C-2018%3A-another-drawing-machine">January 28, 2018: Another Drawing Machine</h2>
<p>Over the weekend I made the decision to build another drawing machine. I've been struggling for ages to find some sort of "interactive mode" for the original drawing machine. I tried building some sort of shape-recognition software that would let you draw shapes and have the machine interpret them. I also talked about building a language-processing system that would allow people to write Sol LeWitt-style instructions and have the machine interpret them. None of that seemed too promising.</p>
<p>So the solution is to build a second drawing machine, one that is designed to be an interactive installation. It's going to look something like this:</p>
<p>TODO add sketch</p>
<p>The plan is to have it done by <a href="https://maxkohler.com/posts/2017-10-01-teaching-machines-to-draw/#friday-febuary-2-2018">Friday</a>.</p>
<p>I've already written some of the <a href="https://github.com/awesomephant/sineMachine">control code</a>. I'm using socket.io to display the function graphs in real time.</p>
<p>I'll need the following parts:</p>
<ul>
<li>2x NEMA 17 Stepper</li>
<li>2x NEMA 17 mounting bracket</li>
<li>2x Stepper Driver (Easydriver)</li>
<li>12V Power supply</li>
<li>Breadboard barrel jack</li>
<li>6x Potentiometer (ie. knobs)</li>
<li>10x Binary Switch</li>
<li>Jumper wire</li>
<li>Breadboards</li>
<li>Arduino Uno (Another One)</li>
<li>A wooden plank to mount everything on</li>
<li>A wooden board to become the control panel</li>
<li>Fishing line</li>
<li>Various fixings</li>
</ul>
<p>Things I'm not sure about yet:</p>
<ul>
<li>How do I mount a pen to the thing?</li>
<li>How do I attach the fishing line to the motor shafts?</li>
<li>How do I attach the drawing machine to the wall in such a way that the pen is in contact with the paper?</li>
</ul>
<p>Using Wikipedia, I managed to cobble together the following functions to generate <a href="https://en.wikipedia.org/wiki/Sine_wave">sine</a>, <a href="https://en.wikipedia.org/wiki/Triangle_wave">triangle</a>, <a href="https://en.wikipedia.org/wiki/Sawtooth_wave">sawtooth</a> and <a href="https://en.wikipedia.org/wiki/Square_wave">square</a> waves that will eventually control the motors. $$a$$ is the amplitude, $$p$$ is the period, $$o$$ moves the curve up and down and $$\varphi$$ moves the curve from left to right (I'm using this to animate it on screen).</p>
<p>$$\DeclareMathOperator{\sgn}{sgn}$$
$$\DeclareMathOperator{\atan}{atan}$$
$$\DeclareMathOperator{\asin}{asin}$$
$$\DeclareMathOperator{\cotan}{cotan}$$</p>
<p>Sine: $$f(x) = a\sin(\frac{2\pi}{p}x + \varphi) + o$$</p>
<p>Square: $$f(x) = a\sgn\big[\sin(\frac{2\pi}{p}x + \varphi)\big] + o$$</p>
<p>Triangle: $$f(x) = \frac{2a}{\pi}\arcsin\big[\sin(\frac{2\pi}{p}x + \varphi)\big] + o$$</p>
<p>Sawtooth: $$f(x) = -\frac{2a}{\pi}\arctan\big[\cot(\frac{\pi}{p}x + \varphi)\big] + o$$</p>
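In JavaScript the four generators come out as something like this (a sketch, assuming p is the period throughout and a the true peak amplitude, which needs a 2/π factor in front of the triangle and sawtooth):

```javascript
// Waveform generators for driving the steppers. a = amplitude,
// p = period, o = vertical offset, phi = phase shift.
const sine = (x, a, p, o, phi) =>
  a * Math.sin((2 * Math.PI * x) / p + phi) + o;

const square = (x, a, p, o, phi) =>
  a * Math.sign(Math.sin((2 * Math.PI * x) / p + phi)) + o;

const triangle = (x, a, p, o, phi) =>
  ((2 * a) / Math.PI) * Math.asin(Math.sin((2 * Math.PI * x) / p + phi)) + o;

// cot(θ) written as 1/tan(θ); Math.atan handles the Infinity at θ = 0.
const sawtooth = (x, a, p, o, phi) =>
  (-(2 * a) / Math.PI) * Math.atan(1 / Math.tan((Math.PI * x) / p + phi)) + o;
```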
<h2 id="january-29%2C-2018">January 29, 2018</h2>
<p>I've started construction on the second drawing machine. I'm using a wooden clipboard from the college art shop for the control panel - seems appropriately haphazard. Apparently I'm the first person to ever buy one of these in the art shop - it took them about 5 minutes to find the price in the register.</p>
<p>I drilled holes to mount six potentiometers and wired them to the Arduino's analogue inputs:</p>
<p>Then I plugged their readings into the code for the sine functions, and to my amazement it worked on the first try. You can twist the knobs and watch the curves on the screen change in real time. <em>Insert mad scientist laughter here</em>. There seems to be some interference between some of the potentiometers - manipulating one changes the readings of others in the series. Apparently this is because <a href="http://forum.arduino.cc/index.php?topic=18874.0">some of them have too much resistance</a>. I'll replace them and see if that fixes it.</p>
<h3 id="peer-assesment">Peer assesment</h3>
<p>TODO make the 100 drawings book
publication</p>
<h2 id="january-31%2C-2018">January 31, 2018</h2>
<p>The flip switches for drawing machine two arrived. I'm planning to use these to</p>
<ol>
<li>Switch between different functions for each stepper</li>
<li>Toggle some sort of randomisation for each function parameter on each stepper</li>
</ol>
<p>I'm focussing on one for the moment. Since there are four functions to choose from, I can combine two switches to generate four possible positions by thinking of each switch as a digit in a binary number:</p>
<table>
<thead>
<tr>
<th>Switch A</th>
<th>Switch B</th>
<th>Binary</th>
<th>Result</th>
</tr>
</thead>
<tbody>
<tr>
<td>Open</td>
<td>Open</td>
<td><code>00</code></td>
<td>Sine</td>
</tr>
<tr>
<td>Open</td>
<td>Closed</td>
<td><code>01</code></td>
<td>Triangle</td>
</tr>
<tr>
<td>Closed</td>
<td>Open</td>
<td><code>10</code></td>
<td>Square</td>
</tr>
<tr>
<td>Closed</td>
<td>Closed</td>
<td><code>11</code></td>
<td>Sawtooth</td>
</tr>
</tbody>
</table>
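In code, decoding the two switches is just a matter of assembling the two bits (a sketch - the real version reads digital pins on the Arduino):

```javascript
// Waveform selection by treating two switches as a 2-bit binary
// number, as in the table above (A is the high bit, B the low bit).
const WAVEFORMS = ["Sine", "Triangle", "Square", "Sawtooth"];

function selectWaveform(switchAClosed, switchBClosed) {
  const index = (switchAClosed ? 2 : 0) + (switchBClosed ? 1 : 0);
  return WAVEFORMS[index];
}
```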
<h2 id="febuary-1%2C-2018">Febuary 1, 2018</h2>
<p>Got the second drawing machine working today:</p>
<p><video playsinline="" muted="" loop="" controls="" autoplay="" src="https://maxkohler.com/assets/unit-10/machine-2.mp4"></video></p>
<p>I found out that to draw a circle (and similar shapes), the two functions need to be out of phase - otherwise you just get straight lines. At the moment I'm doing this by adding a hard-coded number to $$\varphi$$, but that's not the best solution. I can't just add another knob because there are no analogue inputs left on the Arduino. However, I could add another switch to add a secondary function to the "offset" knobs.</p>
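A quick sketch of why the phase offset matters: with both axes driven by the same sine, an offset of 0 collapses everything onto a diagonal line, while π/2 traces a circle.

```javascript
// Position of the pen at time t (one period per unit of t), with a
// given phase offset between the two axis functions.
function tracePoint(t, phaseOffset) {
  return {
    x: Math.sin(2 * Math.PI * t),
    y: Math.sin(2 * Math.PI * t + phaseOffset),
  };
}
// phaseOffset = 0        -> x === y for all t: a straight diagonal
// phaseOffset = Math.PI/2 -> y = cos(...), so x² + y² = 1: a circle
```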
<p>This will make the machine needlessly complicated and more annoying to use. So I'm definitely doing it.</p>
<p>Problems:</p>
<ul>
<li>It runs pretty slow (but switching to quarter- or half stepping should fix that)</li>
<li>There is quite a bit of noise because the motors always run at full speed, coming to a sudden stop (I assume while math is being executed) and running at full speed again. A possible fix might be to run the motors slower when there is less space to cover - this way each "movement section" would be equal in length, which would at least make the noise more uniform. A different type of string (ie. fishing wire) might also help.</li>
<li>The pencil might not be the best drawing instrument to use here. While I like the hazy, out-of-focus shapes it creates, it doesn't produce results quickly enough for a possible gallery installation. I'll try out some different pens and felt-tips, as well as a softer pencil and charcoal.</li>
<li>You can't make a shape the size of the paper at the moment. This should just be a matter of tweaking <a href="https://github.com/awesomephant/sineMachine/blob/master/index.js#L75">this line in the control script</a>.</li>
</ul>
<h2 id="febuary-9%2C-2018">Febuary 9, 2018</h2>
<h3 id="a-new-plan-for-action">A new plan for action</h3>
<p>Following the mid-term review with Tracey and conversations with various people.</p>
<p>The drawing machine project needs a point. I think the point is the following:</p>
<ul>
<li>When I was doing the first set of drawings back in <a href="https://maxkohler.com/posts/2017-10-01-teaching-machines-to-draw/#november-25-2017">November</a>, I was drawn to the ones where the lines start creating grey values - essentially where the drawing starts to move away from being a mathematical line diagram and towards being something more human.</li>
<li>That's why I started doing</li>
</ul>
<h3 id="book-structure">Book Structure</h3>
<ul>
<li>Drawing machine progress (chronological)</li>
<li>Background research</li>
<li>Drawing machine progress</li>
<li>Original research document</li>
<li>Drawing machine progress</li>
</ul>
<p>The machine learning publication could be in a similar format.</p>
Dissertation: How does the conflict between collectivist utopia and individualism in modernism manifest itself in housing architecture?2017-11-01T10:00:00Zhttps://maxkohler.com/posts/2017-08-27-dissertation/<p>What they were working towards was a vision for a society in which citizens, architecture, product design, agriculture, entertainment, science and art would exist together in one unified, rational programme: Modernism. To the young people at the Bauhaus, overlooking the rising industrial town of Dessau from their glass-wrapped studios, this idea must have felt utterly within reach: in a country still struggling to recover from the First World War, with violent revolutions going on in Europe and new technology changing every aspect of life, change seemed inevitable. (Wilder, 2016)</p>
<p>What exactly that change should look like, the Bauhausler never quite agreed on. The early Bauhaus was driven by the search for individual expression. Johannes Itten, with his head shaved and wearing a robe of his own design, taught the now-famous Vorkurs: Here, students developed their personal means of expression through meditation, philosophy and basic exercises (Bauhaus100.de, 2017).</p>
<p>The Bauhaus started to move toward a more collective outlook in 1922, when Theo van Doesburg, a proponent of De Stijl, began teaching at the Bauhaus. He introduced the reduction to geometric shapes and primary colours that would come to define the "Bauhaus Style". The following year Hungarian artist Laszlo Moholy-Nagy took over teaching of the preliminary course. He replaced much of Itten's eclectic curriculum with exercises using industrial material. In the following years, objectivity and scientific rigor remained the governing thought at the Bauhaus. It was during this later period that Marcel Breuer produced furniture out of precision steel tube, Marianne Brandt designed geometric household items and Walter Gropius completed some of the most iconic examples of modernist architecture (Droste, 1998).</p>
<p>Despite its academic success the Bauhaus was faced with political pressure from its inception. The increasingly right-wing government of Weimar forced the Bauhaus to move to Dessau in 1925. When the Nazis came to national power in the 1930s, the Bauhaus moved again, this time to Berlin, where, after a brief period under the leadership of Mies van der Rohe, the school disbanded in 1933. Many former Bauhausler were forced to flee Germany, which of course only served to spread Bauhaus ideas. Gropius, Breuer, Mies and others continued to teach in the United States, contributing to the emergence of the International Style. (Wilder, 2016)</p>
<p>The architectural legacy of the Bauhaus surrounds us to this day. I'm writing this from a 1960s university building with steel windows, concrete slab floors and curtain walls not dissimilar to what Gropius used 40 years prior in Dessau. Similar buildings can be found in cities all over the world. However, despite its ubiquity, modernist architecture, particularly in the context of social housing, has been a point of contention for the better part of a century. Critics like Nikolaus Pevsner describe modernist housing developments as "impersonal and megalomaniac creations" (Fletcher, 2008), incapable of meeting the diverse needs of their residents. This gets to the conflict that this essay sets out to explore: The apparent contradiction between individualism and the collectivist utopia of modernism — a contradiction that is deeply embedded within the modernist movement and the changing perception of its products over the course of the 20th century.</p>
<h2 id="two-conflicts">Two Conflicts</h2>
<p>When we ask about the conflict between individualism and collectivism in modernism, we should start by defining the conflict. In fact, we can identify two different conflicts at play simultaneously:
First, there is the conflict between the collectivist, egalitarian vision of modernism and the image of the heroic, sole creator (be it Le Corbusier, Walter Gropius or Mies van der Rohe) who is tasked to bring that vision to life. This is linked to Marianne DeKoven's (2011) analysis of modernism and gender, which places the myth of the (male) hero artist at the very centre of modernist thinking.
Secondly, I'm going to examine the conflict between modernist architecture and the individualism of the people inhabiting it in the context of post-war consumerism. This conflict is defined on one side by the collectivist utopia of the Bauhaus: A world in which housing, transportation, appliances, culture and food are designed through a scientific process and mass-produced by machines to be affordable to everyone. By reducing forms to their functional minimum, the Bauhaus aimed to create universal solutions for housing, education and everyday life.
On the other side is the populist critique of those <em>universal solutions</em> as being fundamentally at odds with people's inherent individualism — a critique epitomised by the image of the derelict housing block. This line of attack originates in the 1970s with Oscar Newman's (1972) study "Defensible Space: People and Design in the Violent City", which links modernist housing with increased crime, arguing that the spatial design of housing blocks makes them inherently unsafe, and that private space, rather than public space, should be prioritised. Newman's work has since been criticised for its overly broad assumptions about the nature of human interaction (Steventon, 1996). Popular critics such as Tom Wolfe (1981) criticise modernist housing as being overly academic and fundamentally unfit for its purpose. Wolfe cites the widely publicized demolition of the Pruitt–Igoe housing estate in St. Louis in 1972 (only 20 years after its construction) as evidence for the failure of the collectivist ideas of modernism.
This popular rejection of modernist housing models on the grounds that they don't reflect people's individualism can be linked to the emergence of modern consumerism in the second half of the 20th century. As Miles (1998) shows, consumer culture emerges as a result of an increase in real wages and improved production methods suddenly making commodities available to large parts of the population. This leads to a shift from Fordist principles of large-scale production and mass-market appeal to post-Fordist production, in which a diversified workforce creates products designed for smaller and smaller sub-sets of consumers. As a result of this, consumption becomes a cultural act — a way of asserting your identity, belonging to a particular group or having a certain level of status. Crucially, the idea of consumer freedom is linked to the idea of political freedom, as Slater (1997) argues:</p>
<blockquote>
<p>To be a consumer is to make choices: to decide what you want, to consider how to spend your money to get it [...]. 'Consumer sovereignty' is an extremely compelling image of freedom: [...] it provides one of the few tangible and mundane experiences of freedom which feels personally significant to modern subjects. (Slater, 1997, p. 27)</p>
</blockquote>
<p>According to Slater, this link between <em>consumer choice</em> and political freedom is especially pronounced in the 1980s, when "collective and social provision gave way to radical individualism — as Thatcher put it, 'There is no such thing as a society, only individuals and their families'" (Slater, 1997, p. 10).
With individual consumer freedom pitched as the polar opposite of pre-war ideas of collectivism, the subsequent rejection of modernist housing models isn't much of a surprise.
To see how these two conflicts manifest themselves in physical architecture I'm going to introduce the architectural practice at the Bauhaus, placing it in the wider context of modernist thinking. I will then examine the Dessau-Törten housing settlement near Leipzig, Germany as an example of this practice. Built by Walter Gropius between 1927 and 1930, Törten has been subject to significant alterations by its residents over the last 90 years. By tracking these alterations, I will show how these underlying conflicts shift and overlap over time. In closing, I will examine more recent housing models in the context of a post-industrial economy, again discussing how the conflict between individualism and collectivist ideas is reconciled.</p>
<h2 id="the-myth-of-the-hero-creator">The Myth of the Hero Creator</h2>
<p>The emergence of modernism in the beginning of the 20th century coincides with the first wave of feminism. The modernist focus on the machine, speed and efficiency (which were perceived as traditionally male attributes) and its opposition to ornament and sentimentality (which were regarded as female) is seen by critics as a reactionary response by male modernists to the new, empowered woman (DeKoven, 2011). We see this reflected in the openly misogynist language of the 1909 Futurist manifesto:</p>
<blockquote>
<p>We will glorify war—the world's only hygiene—militarism, patriotism, the destructive gesture of freedom-bringers, beautiful ideas worth dying for, and scorn for woman.
We will destroy the museums, libraries, academies of every kind, will fight moralism, feminism, every opportunistic or utilitarian cowardice.</p>
</blockquote>
<p>(Marinetti, 1909)
Here Marinetti is laying out the idea of the heroic, hyper-male creator — a notion that is ultimately reflected in the cult of personality of fascism, which the Futurist movement supported (Blum, 2014).
This regressive notion of the authoritarian male artist stands in contrast to the egalitarian aims of the modernist movement, which included the empowerment of women. DeKoven points out:</p>
<blockquote>
<p>[...] Male Modernist fear of women’s new power [...] resulted in the combination of misogyny and triumphal masculinism that many critics see as central, defining features of Modernist work by men. This masculinist misogyny, however, was almost universally accompanied by its dialectical twin: a fascination and strong identification with the empowered feminine. (DeKoven, 2011, p. 228)</p>
</blockquote>
<p>DeKoven is talking about this contradiction in the context of modernist literature here, but I would argue that her analysis can be expanded to architecture: The figure of the sharply dressed hero architect (be it Le Corbusier, Walter Gropius or Mies van der Rohe) who is literally tasked with designing the new world stands in contrast to the egalitarian, collectivist vision of society the modernist movement was working toward.</p>
<h2 id="from-marinetti-to-gropius">From Marinetti to Gropius</h2>
<p>It is possible to draw a direct lineage from the Futurist movement to the Bauhaus. In 1910, Adolf Loos echoes Marinetti's denunciation of ornament (though with a Darwinian twist, arguing that "<em>cultural evolution</em> is equivalent to the removal of ornament"). Loos goes on to say that</p>
<blockquote>
<p>[...] Ornament is not only produced by criminals; it itself commits a crime, by damaging men's health, the national economy and cultural development. [...] Even greater is the damage ornament inflicts on the workers. As ornament is no longer a natural product of our civilization, it accordingly represents backwardness or degeneration [...] (Loos, 1910)</p>
</blockquote>
<p>The notion that ornament is to be overcome in order to achieve progress is reflected in Gropius' 1919 Bauhaus manifesto:</p>
<blockquote>
<p>The ornamentation of the building was once the main purpose of the visual arts, and they were considered indispensable parts of the great building. Today, they exist in complacent isolation, from which they can only be salvaged by the purposeful and cooperative endeavours of all artisans.</p>
</blockquote>
<p>"The ultimate goal of all art" at the Bauhaus, as Gropius goes on to declare, is architecture. He then explains how "the new building" would unite all artistic disciplines — again echoing the Futurists' denunciation of the past:</p>
<blockquote>
<p>So let us therefore create a new guild of craftsmen, free of the divisive class pretensions that endeavoured to raise a prideful barrier between craftsmen and artists! Let us strive for, conceive and create the new building of the future that will unite every discipline, architecture and sculpture and painting, and which will one day rise heavenwards from the million hands of craftsmen as a clear symbol of a new belief to come.' (Gropius, 1919)</p>
</blockquote>
<p>It is worth highlighting Gropius' use of mediaeval imagery to talk about the future. The concept of craft guilds, which Gropius refers to in the opening sentence, dates back to the 13th century. Further, the "building [...] that will unite every discipline [...] and which will rise heavenwards" is a clear reference to the mediaeval cathedral, which is confirmed by Lyonel Feininger's woodcut "Cathedral" (1919) used to illustrate the text (Burshart, 2009).
Although these mediaeval aesthetics seem opposed to Marinetti's vision of a mechanised future, the underlying ideas are consistent: There is the denunciation of the past, and the rejection of (female) ornamentation in favour of (male) clarity and objectivity. Using almost biblical language, the (male) architect is positioned as a hero figure tasked to build a better society. The contradiction described by DeKoven is perhaps epitomised in Gropius' admission policy: While women were allowed at the Bauhaus (a progressive move in 1919), Gropius made sure they were funneled into the weaving and painting workshops — the architecture department was exclusively male (Droste, 1999).</p>
<h2 id="architecture-at-the-bauhaus">Architecture at the Bauhaus</h2>
<p>Following the revivalist imagery of the manifesto, work at the early Bauhaus was defined by a return to pre-industrial forms. The Sommerfeld House in Berlin (1920, destroyed 1945) with its expressionist wooden decoration, as well as the early furniture of Marcel Breuer and Gunta Stölzl and some of the early pottery are examples of this phase (Bauhaus100, 2017). Christina Lodder places the early Bauhaus as part of a larger artistic movement in search of "spiritual utopia". She argues that "a rejection of materialism and 19th-century positivist outlooks" following the First World War inspired expressionist artists "to infuse [their work] with a spiritual dimension, and to promote the idea that art and architecture were thereby the means of saving mankind from modernity" (Lodder, 2008, p. 24).</p>
<p>The transition to a more rational, technology-focused outlook at the Bauhaus came in 1922 with the arrivals of Theo van Doesburg and Laszlo Moholy-Nagy in Weimar. This new direction was defined by the notion that scientific progress, industrial production and rational decision-making could be employed to solve the "materialism, repressive political structures and glaring social inequalities" of the present (Lodder, 2008, p. 33). From the perspective of modernists, crime, disease, alcoholism and social inequality were directly linked to the "overcrowded cities", "old and rotten buildings and poor sanitary conditions" (Le Corbusier, 1923) that industrialisation had left behind.</p>
<p>A 1930 film titled 'Die Neue Wohnung' [The New Dwelling] illustrates this idea in striking images (fig. 1). Dark shots of derelict workers' homes are interspersed with scenes of domestic violence and disease. This is then set in contrast to the modernist vision of the future: Brightly lit shots of clean interiors with mass-produced, ornament-free furniture. The film ends with a title card announcing: "A better future will hold affordable and humane housing FOR EVERYONE" (Richter, 1930) — emphasizing the aspiration for social equality that imbues modernist thinking.</p>
<p><img src="https://maxkohler.com/assets/unit-9/die-neue-wohnung-web.jpg" alt="Das Neue Wohnen" />
<strong>Figure 1:</strong> Video stills from 'The New Dwelling' ['Die Neue Wohnung'], a 1930 film showing the benefits of modernist housing.</p>
<p>The change in lighting from dark to light in 'The New Dwelling' is no accident: Access to sunlight and air is a central aim of modernist architecture. This can be linked to the belief in the benefits of heliotherapy (the idea that sunlight and air could cure diseases), which was widespread in the 1920s, as was the notion that personal hygiene and cleanliness would lead to a better society (Wilk, 2006). We see Gropius implementing these ideas in Törten by using unusually large windows combined with relatively small floorspace, and floors and furniture that would be easy to clean.</p>
<p>Gropius lays out his own version of these ideas in his 1925 book "Ein Versuchshaus des Bauhauses in Weimar" [A trial building by the Bauhaus in Weimar]. The title refers to the Haus am Horn in Weimar, which was built for the Bauhaus exhibition in 1923. In the introduction Gropius argues that the new age "makes it necessary to finally realise the old idea of building typical dwellings cheaper, better and in larger numbers to give every family access to healthy living conditions" (Gropius, 1930, p. 5). The way to achieve this, according to Gropius, is to understand the housing problem "in its entire sociological, economical, technical and formal context" (Gropius, 1930, p. 5). Gropius also offers specific ideas on how these issues might be addressed.
He argues that because most people have similar basic needs, housing should be uniform and mass-produced in specialised factories. Rather than building houses individually at the building site, they should be dry-assembled from premanufactured components using standardised blueprints. Gropius coins the term "large-scale building blocks" ['Baukasten im Grossen'] to describe this form of modular architecture. Figure 2 illustrates this idea: Individual components (labeled 1 through 6) are assembled into different "machines for living" according to the "number and needs of the inhabitants".</p>
<p><img src="https://maxkohler.com/assets/unit-9/baukasten-im-grossen.jpg" alt="Large Scale Building Blocks" />
<strong>Figure 2:</strong> Illustration showing the concept of Gropius' "Large-Scale Building Blocks", published in Bauhausbuch 3: Ein Versuchshaus des Bauhauses in Weimar, 1925. Unknown artist, Walter Gropius.</p>
<p>The artistic challenge, according to Gropius, lies in finding satisfying spatial arrangements of these building blocks. Gropius briefly mentions the idea that, contrary to popular belief, smaller, well-lit rooms might actually lead to better living conditions — again echoing the common belief in the benefits of sunlight (Gropius, 1930).</p>
<h2 id="the-dessau-t%C3%B6rten-settlement">The Dessau-Törten Settlement</h2>
<p>Over 50 building projects were completed by members of the Bauhaus between 1919 and 1930, and many more after the school disbanded (Engels, 2001). This count includes buildings that were built by Gropius and others relatively independently from the Bauhaus, but some were the result of the type of cross-discipline collaboration that was at the core of the Bauhaus idea. These large-scale projects addressed real-world issues while at the same time serving as classroom experiments at the Bauhaus.
A few of these buildings have become instantly recognisable: The Sommerfeld House (referenced above), the Haus am Horn (1923) and the Bauhaus building and Master's houses in Dessau (1925-26). However, it is the lesser-known examples of Bauhaus architecture that might give us the most insight into the conflict between individualism and collectivism. The Dessau-Törten housing settlement in Dessau, Germany is one such example. Unlike other examples of pre-war modernist architecture, Dessau-Törten has been subject to significant changes since its construction between 1926 and 1928 (Bauhaus Dessau, 2017). In a form of modern archaeology, we can identify different layers of changes made before, during and after the Second World War up to the present day. Each layer can give us hints as to how the conflict between modernist ideas and different forms of individualism was reconciled.</p>
<p><img src="https://maxkohler.com/assets/unit-9/torten-electrical.jpg" alt="Dessau-Törten establishing shot" />
<strong>Figure 3:</strong> Photograph showing Dessau-Törten shortly after its completion, ca. 1926.</p>
<h2 id="initial-construction">Initial Construction</h2>
<p>After a planned housing project in Weimar had failed to materialise (only one building, the Haus am Horn, was ever completed), Törten was the first opportunity for Gropius to put the ideas he had developed during the early 1920s into practice on a large scale. In fact, the prospect of building Törten, supported by the social-democratic government of Dessau, was part of the reason Gropius moved the Bauhaus to Dessau in the first place. Dessau, a rising industrial town, had seen an influx of workers that nearly doubled its population. This led to a housing crisis which it was hoped the Bauhaus would help address. (Dessau-Törten, 2017)
Törten was financed in part by the national government as part of a larger effort to provide affordable housing to lower-income families. Individual units were sold for between RM 9,500 and RM 10,100, or around RM 35 per month — well within the reach of an average industrial worker (Gropius, 1930). The fact that units were sold off to individuals is critical: It allowed homeowners to make changes to their houses with few restrictions.</p>
<p><img src="https://maxkohler.com/assets/unit-9/torten-waterfall.jpg" alt="Dessau Waterfall" /></p>
<p><strong>Figure 4:</strong> Waterfall chart showing the order of construction phases in Dessau-Törten</p>
<p>The settlement served a double, or even triple purpose from the beginning. The Dessau government was hoping for a pragmatic solution to their housing shortage. The national government saw Törten in part as a research project to test new construction methods, granting Gropius additional funds in 1928 to carry out construction experiments and publish the results. Finally, Gropius saw Törten as a way of proving the validity of his vision of architecture, which he had written about for years (Schwarting, 2012).</p>
<p>This triple purpose reflects the conflict at the centre of this essay. On one side is the city of Dessau in a pragmatic effort to provide workers' housing. On the other side is Gropius, the hero architect eager to prove his ideas. We can see Gropius' eagerness reflected in the number of documents, photographs and films documenting the construction process of Törten. Figure 4 shows what today might be referred to as a waterfall chart. Each bar indicates a specific step in the construction to be carried out at a particular time. This illustrates how Gropius not only designed the architecture, but also the production process and the documentation to fit his vision. He writes about the production process in 1930:</p>
<blockquote>
<p>The execution of the shells was done based on a carefully designed plan, in such a way that fixings, wall components and ceiling beams could be manufactured at the building site in a conveyor belt-like process. This method effectively limited loss of time and material [...] (Gropius, 1930)</p>
</blockquote>
<p>Beyond the construction process, Fordist production methods also seem to have inspired the visual language of Törten. In addition to familiar cues of modernist architecture (flat roofs, exposed construction through different surface treatments, factory-like steel fixings), Gropius employs mirrored floor plans, positioning doors and windows of each unit at opposing edges of the facade. This allows Gropius to effectively blur the line between units, creating the effect of a continuous band rather than a row of individual houses - Le Corbusier's vertical "Machine for Living" becomes a horizontal "living conveyor belt" (Schwarting, 2011).
In what we might read as a heroic gesture (by the hero architect), Gropius transforms a pre-existing electrical tower into a monument to technological progress by placing it at the intersection of the two main roads, making it visible from almost every point of the settlement (fig. 3).</p>
<p><img src="https://maxkohler.com/assets/unit-9/torten-1920-2.jpg" alt="Dessau-Törten Gardens" />
<strong>Figure 5:</strong> Contemporary photograph showing the garden side of row houses in Dessau-Törten.</p>
<p>Contrary to the modernist visual language, the urban planning of Törten follows the much earlier concept of the garden city. First proposed by Ebenezer Howard in 1889, the garden city is based on the idea of individual self-sufficiency for each family (Ward, 1992). In Törten this takes the form of a 400 square metre garden attached to each dwelling (fig. 5). This runs contrary to the idea of the minimum dwelling that Gropius alludes to in his writings. Rather than "rationalising" living functions by centralising them, as called for by proponents of the minimum dwelling (Teige, 2012, p. 344), the garden city spreads out food production, preparation and recreational space across each individual dwelling.</p>
<p><img src="https://maxkohler.com/assets/unit-9/torten-construction-1.jpg" alt="Dessau-Törten Construction" />
<strong>Figure 6:</strong> Contemporary photograph showing a row of houses in Dessau-Törten under construction.</p>
<p>Figure 6 shows a section of houses in Dessau-Törten under construction. It highlights the rationalised construction method Gropius designed: Rather than building each house individually, a whole section of identical houses was built at once, allowing for greater efficiency. This photograph is part of an extensive series of professionally-executed photographs documenting the construction process. The fact that such extensive documentation was done can be attributed in part to the experimental nature of Törten — Gropius' experiments had to be documented to be scientifically valid. However, this photograph is more than a neutral document: The two small figures in the background and the dramatic lighting conditions give a monumental scale to the scene. The row of houses continues beyond the right edge of the photograph, reinforcing the effect of an endless conveyor belt. All of this invokes the feeling of optimism and larger-than-life ambition that imbues Gropius' architecture.
This image is featured alongside others in Gropius' 1930 book "Bauhausbauten Dessau", which suggests that Gropius was not only aware of the image, but approved of its message. In addition, a Berlin production company was commissioned to create a documentary showing the construction of the settlement, further underlining Gropius' view of Törten as a vehicle to communicate his ideas (Paulick, 1926).</p>
<h2 id="alterations-before-the-second-world-war">Alterations before the Second World War</h2>
<p>We see the first major deviation from Gropius' plan the day the first families moved in. Few families could afford the RM 1,350 for Marcel Breuer's specially designed furniture set, so they brought in an eclectic collection of traditional furniture, wallpapers and curtains, which made an awkward fit in Gropius' small floor plans (Schwarting, 2012). This is perhaps the first instance in which Gropius' ambition runs up against the economic realities of the 1920s.</p>
<p>Following the initial construction, heat insulation quickly became a concern. Anecdotal accounts describe how doors and windows would freeze shut during the first winter (Dessau-Törten, 2017). This was a direct result of shortcomings in the design.</p>
<ol>
<li>
<p>The steel-framed, single-glazed windows chosen by Gropius were cutting-edge technology at the time. As such, they were not only a third more expensive than traditional wooden windows but also caused major heat loss due to their size and lack of insulation. Most of these windows were replaced with smaller, wooden windows within 10 years of the initial construction.</p>
</li>
<li>
<p>The thin outer walls formed another route for heat to escape. This was a direct result of their industrialised production — since the concrete slabs used to build the walls all had the same dimensions, thermal bridges could form at the gaps between them. Homeowners addressed this by erecting secondary brick facades shortly after the initial construction. (Schwarting, 2011)</p>
</li>
</ol>
<p>I would argue that this set of changes can be read as the homeowners' collective response to Gropius' heroic ambitions. Gropius, an established architect, would likely have been aware of the heat insulation issues caused by his choice of construction method and window fittings. The fact he proceeds anyway speaks to the conflict between Gropius' desire to make a clean break with the past and share his vision of the future, and the needs of the people at the centre of that vision. Yes, steel windows might be more expensive now, we could imagine him reasoning, but they will be much cheaper once industrial production has caught up. Gropius says as much in 1930, admitting that it would take the construction industry some time to adjust to the new way of building (Gropius, 1930). In this case, the individualised ownership of the dwellings gives the people the upper hand in this conflict.</p>
<p>When the Nazis come to power in the 1930s, they mount a concerted effort to replace all of Gropius' steel windows that still remained. This removed the visual effect of Gropius' "conveyor belt" by re-emphasising the lines between neighbouring houses. The addition of fences, hedges and flower beds between dwellings added to this (Schwarting, 2011).</p>
<p>This came after earlier plans to rebuild Törten from the ground up had proven financially unviable; the episode was fashioned into a propaganda victory by the right-wing press at the time. About half an hour up the road, the Bauhaus building itself was surrounded by a grouping of pitched-roof, traditionally-built apartment buildings during this period (Bauhaus building, 2017).
While a full survey of fascist aesthetics is beyond the scope of this essay, we can place this coordinated effort to alter existing architecture, emphasising individual expression over the collective, in the broader context of Nazi propaganda. As Koepnick (1999) points out,
<blockquote>
<p>[...] the Nazis followed two different but overlapping strategies. In their pursuit of a homogenous community of the folk, the Nazis made numerous concessions to the popular demand for the warmth of private life and pleasure in a modern media society [...] but simultaneously hoped that the [...] spectacle of modern consumer culture would break the bonds of old solidarities and prepare the atomized individual for the auratic shapes of mass politics, for mass rituals that promised a utopian unification of modern culture (Koepnick, 1999)</p>
</blockquote>
<p>This would explain why the Nazi government went to such lengths to remove any visual references to collectivist ideas (i.e. Gropius' "conveyor belt") from Törten. By temporarily encouraging the individualism of modern consumer culture, the government hoped to align people with their totalitarian agenda.
Relating this back to the original question, I would argue that this point in time marks an overlap between the two definitions of individualism described above. The first definition, based in the image of the violent, energetic male of the Futurist movement, is continued in the cult of personality of the fascist state. The second definition, based in the idea of individual "consumer freedom", is deployed here as a propaganda tool to achieve the Nazis' political agenda.</p>
<h2 id="alterations-after-the-second-world-war">Alterations after the Second World War</h2>
<p>Immediately after the war, a number of buildings in Törten had to be rebuilt - Dessau, being the site of a Junkers aeroplane factory, had become a bombing target. Since little building material was available, many original components were re-used.
As the economic situation stabilises in the 1950s, the conflict between the collectivist ideals of modernism and individualism as defined by post-war consumerism becomes more prevalent — even as Törten exists within the socialist regime of the GDR. People's focus turns toward expanding their living space: Houses were expanded into the garden (which was no longer needed for food production), and in many cases the roof terrace in the back was enclosed to create an additional room (Schwarting, 2011).</p>
<p><img src="https://maxkohler.com/assets/unit-9/torten-antennas-edit.jpg" alt="Törten Aerials" />
<strong>Figure 7:</strong> 1965 photograph showing a row of houses in Törten equipped with long aerials to receive West-German television</p>
<p>One of the more memorable images from this time (fig. 7) shows a row of houses, each equipped with a tall aerial used to illegally receive West-German television. This is in a way a reversal of the earlier power structure: Rather than implementing a set of political ideals from the top down through architecture (as Gropius had done), individuals are making an architectural intervention to subvert restrictions imposed by a socialist state. According to Husemann (2016), this was a fairly widespread act of disobedience against the government, with up to 85% of the GDR population receiving West-German television.
In terms of the conflict between collectivism and individualism, I read this as a victory of the latter: individual residents, empowered by increasing wealth and education, are satisfying their growing demand for diverse entertainment in an act of direct action against an authoritarian state.
During this period the use of the gardens also changes substantially. As the need for individual food production becomes less pronounced, the gardens take on a more recreational role. The space is also used to accommodate the increasing number of private cars in the settlement. Car ownership in the GDR increased drastically, from 0.2 cars per 100 households in 1955 to 40 in 1982 (Edwards, 1985). In response, many home-owners build garages and carports along the back edge of their property, transforming the gravel path between opposing gardens into a secondary road. Like earlier alterations, these additions appear to be largely done on an individual basis, forming an eclectic array of architectural styles.
Again, the spread of cars as a means of individualised transport and personal expression can be read as a victory of modern consumerism over the collectivist ideas of the 1920s.</p>
<h2 id="alterations-after-1989">Alterations after 1989</h2>
<p>Another significant set of changes to Törten comes after the collapse of the GDR, which suddenly gives residents access to an abundance of tools and building materials through DIY-retail, an industry that had grown substantially since the 1960s. The early 1990s coincide with a period of increased globalised competition, forcing retailers to drop prices and make products accessible to a wider group of consumers (Gelber, 1999).</p>
<p>Figures 8 through 12 show the variety of doors, windows, house numbers, landscaping, facade materials and seasonal decorations that define the Törten settlement today. Figure 8 shows one of two dwellings that remain almost entirely in their original 1928 condition — compared with Figures 9 through 11, it illustrates the degree of visual diversification that has taken place over the last 90 years.</p>
<p><img src="https://maxkohler.com/assets/unit-9/house-1.jpg" alt="Dessau House 1" />
<strong>Figure 8:</strong> Photograph showing the house at 38 Mittelring, one of two dwellings remaining close to the original condition.</p>
<p><img src="https://maxkohler.com/assets/unit-9/house-2.jpg" alt="Dessau House 2" />
<strong>Figure 9:</strong> Photograph showing a dwelling in Dessau-Törten with layers of architectural alterations including facade material, roofing, windows and landscaping.</p>
<p><img src="https://maxkohler.com/assets/unit-9/house-3.jpg" alt="Dessau House 3" />
<strong>Figure 10:</strong> Photograph showing a dwelling in Dessau-Törten with layers of architectural alterations including facade material, roofing, windows and landscaping.</p>
<p><img src="https://maxkohler.com/assets/unit-9/house-4.jpg" alt="Dessau House 4" />
<strong>Figure 11:</strong> Photograph showing a dwelling in Dessau-Törten with layers of architectural alterations including facade material, roofing, windows and landscaping.</p>
<p>There is a case to be made that these alterations are in some sense a continuation of Gropius' idea of modular architecture. DIY-retail makes mass-manufactured tools and building components accessible to large parts of the population. Produced in standard dimensions and in numbers Gropius couldn't have imagined, these components can largely be assembled by homeowners with little specialist knowledge. Gropius' construction method, in which only the side walls are load-bearing, makes changes to the floor plan relatively easy. Yet contrary to Gropius' vision of a dwelling that responded to the functional needs of its residents, the majority of the most recent changes are acts of individual cultural expression in line with Slater's (1997) definition of consumerism.</p>
<p><img src="https://maxkohler.com/assets/unit-9/dessau-doors-web.jpg" alt="Dessau-Törten Doors" />
<strong>Figure 12:</strong> Photographs showing the variety of front doors in Dessau-Törten. The top left photograph shows the door of 35 Doppelreihe, which is the only original 1920s door remaining in the settlement.</p>
<p><img src="https://maxkohler.com/assets/unit-9/dessau-map.jpg" alt="Dessau tourist signage" />
<strong>Figure 13:</strong> Signage in Dessau-Törten pointing to (from top to bottom): Hannes Meyer's Konsum Building, various points of architectural significance, an orthopedic clinic and a pharmacy. The structure in the background is the electrical tower at the centre of the settlement.</p>
<p>With the increasing public recognition of the Bauhaus and its architectural output in the 1980s, public education has become a permanent part of the Törten settlement. In 1992 the first house was restored to its original state by a private foundation. Since then, a number of houses have been restored to various degrees by either the local government or their respective owners (bauhaus-dessau.de, 2017). (The notion of "original state" here is debatable — it is often impossible to discern exactly which materials and construction methods were used during construction).</p>
<p>This latest development can be read as a response to the emergence of the post-industrial society, in which services (including health and education) replace manufacturing as the primary means of generating capital. This transition is perhaps epitomised by the large sign which now stands in front of the central electrical tower in Törten (fig. 13). The top two signs point to a permanent exhibition documenting the history of the estate (housed since 2011 in Hannes Meyer's Konsum Building) and to various architectural points of interest in the settlement, respectively (Moller, 2017). The bottom two signs point to an orthopedic clinic and a pharmacy, serving the largely retirement-age community of Törten. This neatly sums up the developments we'll examine in the following chapter, turning away from Törten to more recent forms of collectivist housing.</p>
<h2 id="collectivist-housing-models-in-the-post-industrial-society">Collectivist housing models in the post-industrial society</h2>
<p>Modernist social housing was in many ways a response to the emergence of a new social class: the industrial worker. After the First World War, rising industrial towns like Dessau attracted large numbers of workers, leading to a sudden population increase that the existing housing and infrastructure was unfit to handle. This led to the poor living conditions modernist architects recognised and attempted to improve through large-scale urban developments. As Bell (1999) shows, we are now at a similar moment of transition — from an industrial society to a post-industrial one. According to Bell, this transition is marked by a number of factors: <em>services</em> replace agriculture and manufacturing as the primary source of employment ("services" meaning transportation and logistics, as well as education and healthcare). Further, the importance of physical infrastructure (roads, trains) decreases as <em>intellectual infrastructure</em> (internet connections, computing power) becomes critical to economic productivity.</p>
<p>As the industrial revolution created the industrial worker, the transition to a post-industrial society is likely to create new classes of people with new housing needs.</p>
<p>We can see this process happening in front of us already. Simpson (2015) describes a new class of people created entirely by scientific progress and the abundance of services in the post-industrial age: the "Young-Old". First described by Bernice Neugarten in 1974, the Young-Old are people at retirement age with higher purchasing power, a higher likelihood of living independently from their families, more education and better health than those previously in their age group. According to sociologist Andrew Blaikie (1999, cited in Simpson, 2015, p. 13), the Young-Old might be the first large demographic group in history "whose daily experience [does] not consist of work or schooling [...]"</p>
<p>According to Simpson, the emergence of the Young-Old can be linked to two factors aligned with Bell's analysis of the post-industrial society. First, advances in public health and nutrition and the decline of manual labour have led to an increased life expectancy in both industrial and developing countries — doubling from 40 years in 1840 to 80 years in 2000. Medical products developed over the last century, such as the artificial hip, the contact lens, electronic hearing aids and Viagra, extend the physical capabilities of the body. This increased importance of the medical sector is in line with Bell's model, which describes a shift to a primarily service-based economy. Secondly, the decline of the multi-generational family creates a class of retirees that has to be more self-reliant than previous generations. Again this can be linked to Bell's description of post-industrialism: "intellectual work" can be done independently of any physical location. This in turn creates what Simpson describes as the "[increased] mobility requirements of the modern workforce, which drives people apart geographically" (Simpson, 2015, p. 45).</p>
<p>While in the 19th century the invention of the steam engine and Fordist production methods brought about the industrial worker, it is scientific progress and the shift from industrial to intellectual work that leads to the emergence of the Young-Old. They have the desire, health and financial means to live independently, which makes the institutional homes of previous generations unfit for their needs. At the same time, shifts in the labour market and changes in family values have made the traditional model of the multi-generational household unavailable (or undesirable). The only response to this new class of people would appear to be the development of new housing models.</p>
<p>Simpson describes a number of these new models, the most striking of which is perhaps "The Villages", a vast retirement community outside Orlando, Florida. With a population of 119,000 in 2016 (Schneider, 2016), The Villages are not only the largest retirement community in the world, but America's fastest-growing city overall. In many ways, they bear a striking resemblance to the social housing developments of the 1920s: houses are mass-produced and laid out following a preconceived plan. Transportation, architectural vernacular, healthcare, local history, media and (by way of age and income segregation) the social fabric of the settlement are part of a "Gesamtkunstwerk" of a scope far beyond what the modernists were able to achieve. Here the hero architect of the 1920s is replaced by the faceless, owner-less development corporation of post-war capitalism.</p>
<p>The developers of The Villages deploy a complex architectural system to create a specific "lifestyle experience". To mask the industrial scale of the settlement, it is split up into smaller "towns", each with its own, entirely fabricated local history. The developers use techniques developed in theme parks to artificially age buildings, and even go so far as to install fictional historical artifacts and historical plaques to create a feeling of local history. All of this is done (with some success) to encourage the kind of social interaction between residents that represents an idealised idea of life in a small town. The designers are surprisingly candid about this:</p>
<blockquote>
<p>[...] We write storylines that we use, and that comes from my theme park background. The storyline acts as your concept, you go back to it to design facades. It's funny, we make up stories for some of these buildings and some of the residents think they're real. They don't even know any better because we even go to the trouble of painting old graphics on the buildings and people think that's an old general store when it really isn't. (Simpson, 2015, p. 207)</p>
</blockquote>
<p>The primary means of transportation in The Villages is the golf cart. This is another deliberate design decision by the developer: the golf cart bridges the gap between the automobile (which is costly, associated with working life and requires a licence, which residents may have lost or never acquired in the first place) and the mobility scooter (which is slow and associated with the frailty of old age). The developer encourages golf cart use by making dedicated golf cart roads, bridges and tunnels part of the urban planning, and by incorporating golf cart-related events (such as parades) into the social programme. This is reiterated by The Villages' marketing material, which often describes points of interest as being "only a short golf cart ride away" (The Villages, 2017).</p>
<p>We find the notion of using architectural intervention to design social interaction on a much smaller scale in the work of Hannes Meyer (who succeeded Gropius as Bauhaus director in 1928): his ADGB Trade Union School near Berlin (1928-1930) is designed to break students into groups of four — this was thought to be the ideal number to accommodate learning.</p>
<p>In another echo of the 1920s, we find the notion of heliotherapy reflected in The Villages' marketing material, for instance in these song lyrics from a 2011 video advert showing landscape shots and residents interacting in bright sunlight:</p>
<blockquote>
<p>It's a little slice of paradise / Sunshine and golf galore / Neighbours stroll the old town square / And the good life is in store / The Villages / Where the sun shines all year round / The Villages / Florida's friendliest hometown / From our family to yours / From our family to yours / Come on / Come on down / We're Florida's friendliest hometown</p>
</blockquote>
<p>(The Villages Florida, 2011).</p>
<p>How do The Villages reconcile the conflict between their essentially collectivist design and their residents' desire for individualism (which compelled them to move out of the multigenerational household in the first place)? I would argue that the developer achieves this through what is essentially marketing. They emphasise the benefits of the collectivist settlement (centralised access to healthcare, unified transportation, social cohesion) while creating the perception of individual freedom and self-governance — sometimes in the same TV commercial.</p>
<p><img src="https://maxkohler.com/assets/unit-9/villages-golf-carts.jpg" alt="The Villages Golf Carts" />
<strong>Figure 14:</strong> Photograph showing customised golf carts in The Villages displayed as part of a developer-supported Christmas parade.</p>
<p>I would argue that part of the reason this succeeds is the overwhelming scale of The Villages: with 2,400 organised clubs (one for every 65 residents), hundreds of sports facilities and thousands of planned events <em>a month</em> (125 on the day of this writing alone), the developers are able to create an environment in which individual choice seems unlimited. This perception of individual freedom is reinforced through carefully planned pockets of individual expression, such as residents decorating their golf carts (fig. 14) — perhaps analogous to the facade coverings, landscaping and seasonal decorations used as a means of cultural expression in Törten. In cases like this, the developer gives up direct design control — though the results are fed back (in the form of developer-sponsored parades and features in developer-funded local media) into the larger lifestyle experience the developer is attempting to sell.</p>
<h2 id="conclusion">Conclusion</h2>
<p>We've established that the conflict between individualism and the collectivist ideals of modernism is twofold, depending on how it is framed. Following the feminist critique of modernism, we've seen how the figure of the male hero architect stands in conflict with the egalitarian aims of the modernist movement. We've seen this conflict played out in Walter Gropius' Dessau-Törten housing settlement, where he makes design decisions aimed at showing his vision of the future rather than addressing the pragmatic needs of the present. This is reconciled by individual residents reverting their houses back to earlier construction methods (by replacing steel windows with wooden ones and erecting brick facades).</p>
<p>In a strange overlap between our two definitions of individualism, the Nazi government removes many of the visual clues representing collectivist ideas in Törten. However, this is in line with the contradictory methods of Nazi propaganda, which leverages early forms of consumerism to make the population more receptive to their authoritarian agenda.</p>
<p>After the Second World War, the conflict between individualism and collectivism shifts from inside the modernist movement to the outside world and the emerging figure of the modern consumer. The architectural interventions in Törten are evidence of a decades-long negotiation between the collectivist vision of modernism and people's post-Fordist demand for cultural self-expression.</p>
<p>The transition to a post-industrial society has created a demand for new forms of housing. The Villages of Florida are a vivid example of this. With their heavily programmed lifestyle, centralised medical care, transportation and mass-produced housing, they might be described as the urban manifestation of a kind of "leisure socialism" (Simpson, 2015, p. 246). Here the figure of the hero architect of the 1920s is replaced by the owner-less corporation of the 1980s.</p>
<p>I will end by arguing that the continued transition to a post-industrial economy will likely create more demand for new forms of housing. A re-examination of modernist ideas of social housing might be part of this debate, in line with a broader re-examination of socialist ideas following a disillusionment with the results of economic liberalism (which led to the rejection of modernist housing models in the first place), especially among younger generations. Shrimpton et al. (2017) sum up this growing sense of anxiety, showing that</p>
<blockquote>
<p>[...] Britons no longer think young people will have a better life than previous generations, with only around one quarter (23 per cent) of adults taking this view. Instead, roughly half (48 per cent) believe that millennials will have a worse life than their parents.</p>
</blockquote>
<p>Indeed, emerging writers like Owen Hatherley ("Militant Modernism", 2009) and researchers like Peter Chadwick ("This Brutal World", 2016) are already working to change the public perception of modernist housing. This development can only be welcomed. However I would take the position that in addition to a re-examination of modernist housing models, entirely new modes of living might be needed. In an economy that relies increasingly on intellectual labour done by a geographically independent workforce, perhaps this offhand remark made by Gropius in 1925 (six years before the first caravan was manufactured in Germany) might gain new importance (Gunkel, 2011):</p>
<blockquote>
<p>Perhaps "mobile living-shells" ["mobile Wohngehäuse"], allowing us to take with us all the conveniences of a real [traditional] living standard even through relocations, are no longer too far-fetched a utopia. (Gropius, 1925, p. 5)</p>
</blockquote>
<h2 id="list-of-figures">List of Figures</h2>
<div class="footnotes" markdown="1">
- **Figure 1:** Hans Richter (1930). *Die Neue Wohnung* [Video Stills]. Available at: [https://www.youtube.com/watch?v=gAUhQHRANj4] (Accessed November 17, 2017)
- **Figure 2:** Unidentified Artist, Walter Gropius (1925). *Illustration showing the concept of Gropius' 'Large-Scale Building Blocks'* [Illustration]. In Gropius, W. (1925) *Bauhausbuch 3: Ein Versuchshaus des Bauhauses in Weimar*. Weimar: Verlag der Bauhaus-Universität Weimar, page 6.
- **Figure 3:** Unidentified Photographer (ca. 1926). *Housing Development, Dessau-Törten* [Photograph]. Available at: [https://www.harvardartmuseums.org/collections/object/52799?position=65] (Accessed November 17, 2017)
- **Figure 4:** Unidentified Artist (ca. 1928). *Waterfall chart showing the order of construction phases in Dessau-Törten* [Chart]. Available at [https://www.harvardartmuseums.org/collections/object/30157?position=128] (Accessed November 20, 2017)
- **Figure 5:** Unidentified Photographer (ca. 1928). *Housing development Dessau-Törten: Rooftop view of the garden side of row houses* [Photograph]. Available at: [http://www.harvardartmuseums.org/collection/object/169050?position=24] (Accessed November 20, 2017)
- **Figure 6:** Unidentified Photographer (ca. 1926). *Housing development Dessau-Törten: Row houses under construction* [Photograph]. Available at: [http://www.harvardartmuseums.org/collection/object/53030?position=91] (Accessed November 12, 2017)
- **Figure 7:** Unidentified Photographer (ca. 1965). *Row houses in Dessau-Törten* [Photograph] in Schwarting, A (2011) *Das Verschwinden der Revolution in der Renovierung*. Berlin: Gebr. Mann Verlag, page 58.
- **Figure 8:** The author (2017). *Example of a typical row house in Dessau-Törten* [Digital Photograph].
- **Figure 9:** The author (2017). *Example of a typical row house in Dessau-Törten* [Digital Photograph].
- **Figure 10:** The author (2017). *Example of a typical row house in Dessau-Törten* [Digital Photograph].
- **Figure 11:** The author (2017). *Example of a typical row house in Dessau-Törten* [Digital Photograph].
- **Figure 12:** The author (2017). *Composite photograph showing examples of front doors in Dessau-Törten* [Digital Photographs, Composite].
- **Figure 13:** The author (2017). *Signage in Dessau-Törten* [Digital Photograph].
- **Figure 14:** Currie, C. (2013). *Customised golf carts as part of the '2013 The Villages Golf Carts Parade'* [Digital Photograph]. Available at: [https://photonews247.com/tag/christmas-decorated-golf-cart-the-villages-fl/] (Accessed November 27, 2017)
</div>
<h2 id="references">References</h2>
<h3 id="visits">Visits</h3>
<div class="footnotes" markdown="1">
- Bauhaus Dessau (November 4, 2017). Dessau, Germany (Permanent)
- Housing Settlement Dessau-Törten (November 4-5, 2017). Dessau, Germany (Permanent)
</div>
<h3 id="online-sources">Online Sources</h3>
<div class="footnotes" markdown="1">
- Bauhaus100.de. (2017). *Sommerfeld House, Berlin*. [online] Available at: https://www.bauhaus100.de/en/past/works/architecture/haus-sommerfeld/ [Accessed 16 Nov. 2017].
- Moller, W. (2017). Email to the author, November 22, 2017.
- Bauhaus-dessau.de. (2017). Dessau-Torten Housing Estate by Walter Gropius (1926-28). [online] Available at: http://www.bauhaus-dessau.de/index.php?en/architecture/bauhaus-buildings-in-dessau/dessau-toerten-housing-estate [Accessed 2 Dec. 2017].
- Wilder, C. (2016). On the Bauhaus Trail in Germany. The New York Times, [online] p.TR1. Available at: https://www.nytimes.com/2016/08/14/travel/bauhaus-germany-art-design.html [Accessed 16 Nov. 2017].
- Beckett, A. (2016). *The fall and rise of the council estate*. The Guardian. [online] Available at: https://www.theguardian.com/society/2016/jul/13/aylesbury-estate-south-london-social-housing
- Die Neue Wohnung. (1930). [film] Directed by H. Richter. Basel: WOBA. [available at https://www.youtube.com/watch?v=gAUhQHRANj4]
- Husemann, R. (2016). Geistige Grenzgänger. [online] Süddeutsche.de. Available at: http://www.sueddeutsche.de/politik/westfernsehen-in-der-ddr-geistige-grenzgaenger-1.3010277 [Accessed 1 Dec. 2017].
- Schneider, M. (2016). The Villages is nation's fastest growing, again. [online] OrlandoSentinel.com. Available at: http://www.orlandosentinel.com/news/lake/os-ap-the-villages-is-nations-fastest-growing-20160324-story.html [Accessed 1 Dec. 2017].
- Shrimpton, H., Skinner, G. and Hall, S. (2017). The Millennial Bug: Public attitudes on the living standards of different generations. [PDF] Resolution Foundation. Available at: http://www.resolutionfoundation.org/app/uploads/2017/09/The-Millennial-Bug.pdf [Accessed 2 Dec. 2017].
</div>
<h3 id="books%2C-articles">Books, Articles</h3>
<div class="footnotes" markdown="1">
- Bell, D. (1973). The coming of post-industrial society: a venture in social forecasting. New York: Basic Books.
- Blum, C. (2014). Marvellous Masculinity: Futurist Strategies of Self-Transfiguration through the Maelstrom of Modernity. In: N. Lusty and J. Murphet, ed., Modernism and Masculinity. Cambridge: Cambridge University Press.
- Bushart, M. (2009). At the Beginning, a misunderstanding: Feininger's Cathedral and the Bauhaus Manifesto. In: M. Siebenbrodt, J. Wall and K. Weber, ed., Bauhaus: A Conceptual Model. Ostfildern: Hatje Cantz
- DeKoven, M. (2011). Modernism and Gender. In: M. Levenson, ed., The Cambridge Companion to Modernism. [online] Cambridge: Cambridge University Press. Available at: https://play.google.com/store/books/details?id=NMynAgAAQBAJ&hl=en [Accessed 18 Nov. 2017].
- Droste, M. (2015). *Bauhaus*. Köln: Taschen.
- Edwards, G. (1985). *GDR society and social institutions*. London: Macmillan.
- Engels, H. (2001). *Bauhaus-Architektur*. München: Prestel.
- Gropius, W. (1925). *Bauhausbuch 3: Ein Versuchshaus des Bauhauses in Weimar*. München: A. Langen.
- Gropius, W. (1930). *Bauhausbuch 12: Bauhausbauten Dessau*. München: A. Langen.
- Schwarting, A. (2011). *Das Verschwinden der Revolution in der Renovierung*. Berlin: Gebr.Mann Verlag.
- Schwarting, A. (2012). *Die Siedlung Dessau-Törten*, 1926 bis 1931. Leipzig: Stiftung Bauhaus Dessau.
- Simpson, D. (2015). *The Young-Old: Urban Utopias of an Ageing Society*. Zurich: Lars Muller Publishers
- Teige, K. and Dluhosch, E. (2002). *The Minimum Dwelling*. Cambridge: The MIT Press.
- Wilk, C. (2008). *The healthy body Culture* in "Modernism: Designing a New World". London: V&A Publications.
- Lodder, C. (2008). *Searching for Utopia* in "Modernism: Designing a New World". London: V&A Publications
- Wolfe, T. (1985). *From Bauhaus to our house*. New York: Picador.
- Steventon, G. (1996). *Defensible space: A critical review of the theory and practice of a crime prevention strategy*. Urban Design International, Volume 1, 1996.
- Gelber, S. (1997). Do-It-Yourself: Constructing, Repairing and Maintaining Domestic Masculinity. American Quarterly, [online] 49(1), pp.66-112. Available at: https://muse-jhu-edu.arts.idm.oclc.org/article/2269 [Accessed 29 Nov. 2017].
- Gropius, W. (1919). Bauhaus Manifesto. [Manifesto] Bauhaus-Archiv Berlin (https://www.bauhaus100.de/en/past/works/education/manifest-und-programm-des-staatlichen-bauhauses/), Berlin.
- Koepnick, L. (1999). Fascist Aesthetics Revisited. Modernism/modernity, [online] 6(1), pp.51-73. Available at: https://muse.jhu.edu/article/23257 [Accessed 19 Nov. 2017].
- Le Corbusier and Goodman, J. (1927). Toward an Architecture. 2nd ed. Los Angeles: Getty Publications.
- Loos, A. (1910). Ornament and Crime.
- Marinetti, F. (1909). The Founding and Manifesto of Futurism. [Manifesto] University of California, Los Angeles (http://classes.dma.ucla.edu/Winter05/25-1/projects/cayla/week8/manifesto-set.pdf), Los Angeles.
- Miles, S. (2006). Consumerism as a Way of Life. London: Sage Publications.
- Newman, O. (1972). Defensible space: People and Design in the Violent City. London: Architectural Press.
- Slater, D. (1997). Consumer culture and Modernity. Cambridge: Polity Press.
- Ward, S. (2005). The Garden City: Past, present and future. Abingdon: Routledge.
</div>A nice Wordpress development setup2017-12-03T22:00:00Zhttps://maxkohler.com/posts/2017-12-02-wordpress-development-setup/<p>This setup is all command line based, but once you get used to it, it's <em>much</em> nicer than the <a href="https://www.apachefriends.org/index.html">XAMPP</a>-based workflow I had before.</p>
<h3 id="vvv">VVV</h3>
<p>All my projects run on <a href="https://varyingvagrantvagrants.org/">VVV</a>, which (as far as I understand) is a wrapper around <a href="https://www.vagrantup.com/">Vagrant</a> and <a href="https://www.virtualbox.org/">VirtualBox</a>. You install those two first, then pull down the VVV repo following <a href="https://varyingvagrantvagrants.org/docs/en-US/installation/">these instructions</a>. Once that's done, you can run</p>
<pre><code>vagrant up
</code></pre>
<p>which spins up a virtual machine running a Linux image with all the stuff Wordpress needs to function - PHP, MySQL and whatever else. <a href="https://wpbeaches.com/update-varying-vagrant-vagrants-vvv/">Updating VVV can be a bit finicky</a>.</p>
<p>To start a new Wordpress project, you open the <code>vvv-custom.yml</code> file and add an entry like this:</p>
<pre><code>my-site:
  repo: https://github.com/Varying-Vagrant-Vagrants/custom-site-template.git
  site_title: "My Cool Website"
  hosts:
    - my-cool-site.test
</code></pre>
<p><a href="https://varyingvagrantvagrants.org/docs/en-US/adding-a-new-site/">The documentation goes into more detail on this</a>. Then you run <code>vagrant up --provision</code>, which goes through your <code>vvv-custom.yml</code> file and sets up a fresh Wordpress install for each site you've configured.</p>
<p>The default domain extension used to be <code>.dev</code>, but apparently <a href="https://github.com/Varying-Vagrant-Vagrants/VVV/issues/583">Google has bought that</a>, which leads to all sorts of problems. I have a bunch of sites still configured to <code>.dev</code> domains, but it looks like <a href="https://github.com/Varying-Vagrant-Vagrants/VVV/issues/583#issuecomment-332046448">the migration is non-trivial</a>. So I'm going to leave my existing sites for now until something breaks.</p>
<h2 id="browsersync">Browsersync</h2>
<p>I use <a href="https://browsersync.io/docs/grunt">Grunt Browsersync</a> so I don't have to refresh the page when I'm working (it also does CSS injection and other neat things). You can point it to the VVV domain in your <code>gruntfile.js</code> using the <code>proxy</code> option:</p>
<pre><code>browserSync: {
  dev: {
    bsFiles: {
      src: 'assets/css/style.css'
    },
    options: {
      proxy: "my-cool-site.test"
    }
  }
}
</code></pre>
<p>Works like a charm.</p>
<h2 id="updating-wordpress-using-wp-cli">Updating Wordpress using WP-CLI</h2>
<p>Another neat thing you can do is update plugins, themes and Wordpress itself right from the command line using <a href="http://wp-cli.org/">WP-CLI</a>. That feels much nicer to me than clicking around the Wordpress admin.</p>
<p>The first thing you need to do is <code>ssh</code> into your virtual machine:</p>
<pre><code>vagrant ssh
</code></pre>
<p>On my Windows machine I have to do this in Git Bash because that comes with an SSH client. Then you <code>cd</code> into the folder that belongs to whichever site you're working on:</p>
<pre><code>cd /srv/www/my-cool-site/
</code></pre>
<p>Then you can run this and walk away while your site updates itself:</p>
<pre><code>wp core update; wp plugin update --all; wp theme update --all
</code></pre>
<p><em>How nice is that.</em> WP-CLI has <a href="https://developer.wordpress.org/cli/commands/">a lot more options</a> to make finer-grained changes if you need to.</p>
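<p>For instance — the plugin name below is just a placeholder, and it's worth checking flags against the WP-CLI version your VM ships with — a few finer-grained commands might look like this:</p>

```shell
# List installed plugins and their update status before touching anything
wp plugin list

# See whether a core update is available without applying it
wp core check-update

# Update a single plugin (here 'akismet' is only an example name)
wp plugin update akismet

# Export the database first - cheap insurance before bulk updates
wp db export backup.sql
```

<p>All of these run from the site's folder inside the VM, same as the update one-liner above.</p>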
<h2 id="installing-plugins-with-wp-cli">Installing plugins with WP-CLI</h2>
<pre class="language-bash"><code class="language-bash">wp plugin <span class="token function">install</span> custom-post-type-ui <span class="token parameter variable">--activate</span>
wp plugin <span class="token function">install</span> timber-library <span class="token parameter variable">--activate</span>
wp theme <span class="token function">install</span> https://github.com/timber/starter-theme/archive/master.zip
wp theme activate starter-theme</code></pre>
<h2 id="todo">Todo</h2>
<ul>
<li>While my themes are on Github, I haven't found a compelling way to deploy from there. Might be worth just paying for a service like <a href="https://ftploy.com/">FTPloy</a> that does it for you.</li>
<li>I will need to buy <a href="https://deliciousbrains.com/wp-migrate-db-pro/">WP Migrate DB Pro</a> at some point to migrate data between my local Wordpress install and the live version.</li>
</ul>
LaTeX Recipes2017-12-03T22:00:00Zhttps://maxkohler.com/posts/2017-12-07-notes-on-latex/<p>I spent at least a day and a half getting all my figures numbered correctly, making sure my bibliography was formatted consistently and all my citations were in the right format. <em>Numbering things, making sure data is in the right format</em> - sounds like something a <em>computer</em> would be good at.</p>
<p>Turns out there is software to do precisely this, and it's been around for over 30 years: it's called LaTeX.</p>
<h2 id="bold%2C-italic">Bold, Italic</h2>
<pre class="language-latex"><code class="language-latex">The link between <span class="token function selector">\textit</span><span class="token punctuation">{</span>consumer choice<span class="token punctuation">}</span> and political freedom is especially pronounced in the 1980s</code></pre>
<pre class="language-latex"><code class="language-latex">The link between <span class="token function selector">\textbf</span><span class="token punctuation">{</span>consumer choice<span class="token punctuation">}</span> and political freedom is especially pronounced in the 1980s</code></pre>
<h2 id="images">Images</h2>
<pre class="language-latex"><code class="language-latex"><span class="token function selector">\begin</span><span class="token punctuation">{</span><span class="token keyword">figure</span><span class="token punctuation">}</span><span class="token punctuation">[</span>h<span class="token punctuation">]</span>
<span class="token function selector">\includegraphics</span><span class="token punctuation">[</span>width=<span class="token function selector">\textwidth</span><span class="token punctuation">]</span><span class="token punctuation">{</span>./images/die-neue-wohnung-web.jpg<span class="token punctuation">}</span>
<span class="token function selector">\caption</span><span class="token punctuation">{</span>Video stills from 'The New Dwelling' <span class="token punctuation">[</span>'Die Neue Wohnung'<span class="token punctuation">]</span>, a 1930 film showing the benefits of modernist housing<span class="token punctuation">}</span>
<span class="token function selector">\label</span><span class="token punctuation">{</span><span class="token keyword">fig:universe</span><span class="token punctuation">}</span>
<span class="token function selector">\end</span><span class="token punctuation">{</span><span class="token keyword">figure</span><span class="token punctuation">}</span></code></pre>
<p><code>h</code> is a placement option that controls where in the document the figure will show up. <code>h</code> puts it roughly where it appears in the source, <code>t</code> puts it at the top of a page, and combinations like <code>ht</code> try each position in order. There are loads of other options.</p>
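<p>The <code>\label</code> is what makes the numbering useful: you can refer to the figure from anywhere in the text and LaTeX keeps the numbers in sync automatically (the sentence below is just example text):</p>

```latex
As figure \ref{fig:universe} shows, modernist housing was
presented to the public as progress (see page \pageref{fig:universe}).
```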
<h2 id="sections">Sections</h2>
<pre class="language-latex"><code class="language-latex"><span class="token function selector">\section</span><span class="token punctuation">{</span><span class="token headline class-name">Section Title</span><span class="token punctuation">}</span>
<span class="token function selector">\subsection</span><span class="token punctuation">{</span><span class="token headline class-name">Sub Section Title</span><span class="token punctuation">}</span></code></pre>
<h2 id="citations">Citations</h2>
<p>All your references go in a separate file in this format:</p>
<pre class="language-latex"><code class="language-latex">@online<span class="token punctuation">{</span>wilder,
author = "Charly Wilder",
year = "2016",
institution = "The New York Times",
note = "Accessed Nov. 14, 2017",
title = "On the Bauhaus Trail in Germany",
url = "https://www.nytimes.com/2016/08/14/travel/bauhaus-germany-art-design.html"
<span class="token punctuation">}</span></code></pre>
<p>This lets you keep more information than will eventually end up in the bibliography, e.g. the author's full name. In your actual text document, you <em>reference</em> entries in your bibliography file like this:</p>
<pre class="language-latex"><code class="language-latex">In a country still struggling to recover from the First World War, with violent revolutions going on in Europe and new technology changing every aspect of life, change seemed inevitable. <span class="token function selector">\autocite</span><span class="token punctuation">{</span>wilder<span class="token punctuation">}</span></code></pre>
<pre class="language-latex"><code class="language-latex">Popular critics such as <span class="token function selector">\textcite</span><span class="token punctuation">{</span>wolfe<span class="token punctuation">}</span> criticise modernist housing as being overly academic and fundamentally unfit for its purpose. </code></pre>
<p>Again, there are many more options.</p>
<p>LaTeX pulls the information it needs from the bibliography file and formats the citation according to the bibliography style. This means you can easily change your citation format if you need to - since we've <em>separated data from presentation</em>, we can recompile the document in a different format at any point.</p>
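<p>To make this work you also need to load the bibliography file and print the bibliography somewhere. Roughly like this - a sketch assuming <code>biblatex</code> with the <code>biber</code> backend (which is what provides <code>\autocite</code> and <code>\textcite</code>); your style options may differ:</p>

```latex
% in the preamble
\usepackage[style=authoryear, backend=biber]{biblatex}
\addbibresource{bibliography.bib} % the file holding the @online entries

% at the end of the document
\printbibliography
```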
<h2 id="front-matter">Front Matter</h2>
<p>Since we've given LaTeX all sorts of information about our document, we can do neat things like this:</p>
<pre class="language-latex"><code class="language-latex"><span class="token function selector">\begin</span><span class="token punctuation">{</span><span class="token keyword">titlepage</span><span class="token punctuation">}</span>
<span class="token function selector">\maketitle</span>
<span class="token function selector">\end</span><span class="token punctuation">{</span><span class="token keyword">titlepage</span><span class="token punctuation">}</span></code></pre>
<pre class="language-latex"><code class="language-latex"><span class="token function selector">\tableofcontents</span></code></pre>
<pre class="language-latex"><code class="language-latex"><span class="token function selector">\listoffigures</span></code></pre>
<h2 id="headers-and-footers">Headers and Footers</h2>
<pre class="language-latex"><code class="language-latex"><span class="token function selector">\usepackage</span><span class="token punctuation">{</span>fancyhdr<span class="token punctuation">}</span>
<span class="token function selector">\pagestyle</span><span class="token punctuation">{</span>fancy<span class="token punctuation">}</span>
<span class="token function selector">\fancyhf</span><span class="token punctuation">{</span><span class="token punctuation">}</span>
<span class="token function selector">\fancyhead</span><span class="token punctuation">[</span>LE,RO<span class="token punctuation">]</span><span class="token punctuation">{</span><span class="token function selector">\nouppercase</span><span class="token punctuation">{</span><span class="token function selector">\leftmark</span><span class="token punctuation">}</span><span class="token punctuation">}</span>
<span class="token function selector">\fancyfoot</span><span class="token punctuation">[</span>LE,RO<span class="token punctuation">]</span><span class="token punctuation">{</span><span class="token function selector">\thepage</span><span class="token punctuation">}</span></code></pre>
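<p>For reference, the position arguments combine <code>L</code>/<code>C</code>/<code>R</code> with <code>E</code>/<code>O</code> for even and odd pages, so <code>[LE,RO]</code> targets the outer edge in a two-sided layout. A page number simply centred in the footer would instead be:</p>

```latex
\fancyfoot[C]{\thepage} % centred page number on every page
```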
Face recognition, machine learning2018-01-17T19:15:00Zhttps://maxkohler.com/posts/2018-01-17-feret-database/<h2 id="january-2%2C-2018">January 2, 2018</h2>
<p>I managed to work my way through <a href="http://shop.oreilly.com/product/0636920052289.do">Hands-On Machine Learning with Scikit-Learn & TensorFlow</a> by Aurélien Géron. Some of the more advanced math is still beyond me (remembering how vectors work was hard enough), but I feel like I've now got an actual understanding of some of the acronyms that get thrown around a lot: Deep Learning, Neural Networks, MLP, TensorFlow and so on.</p>
<p>An important point that's made early in the book is that machine learning isn't the same thing as neural networks. Géron quotes Tom Mitchell (1997):</p>
<blockquote>
<p>A computer program is said to learn from experience E with respect to some task T and some performance measure P, if its performance on T, as measured by P, improves with experience E.</p>
</blockquote>
<p>Basic methods like linear regression fall under this definition just as much as neural networks do.</p>
<p>Chapter 14, which deals with <em>Recurrent Neural Networks</em>, is particularly exciting. As Géron points out,</p>
<blockquote>
<p>RNNs [are] a class of nets that can predict the future. [...] RNN's ability to anticipate also makes them capable of surprising creativity.</p>
</blockquote>
<p>Evidently this is the technology behind some of these <a href="https://magenta.tensorflow.org/">Google Magenta Experiments</a>. A later chapter in the book describes how you can train a neural network in such a way that given a set of source images, it can generate new images that look as real as the input images - exciting stuff. I'm hoping to do this with images of faces - generating portraits of people that don't exist.</p>
<p>However, I do suspect that the laptop I'm typing this on won't have nearly enough processing power to do all of that. Finding enough source images will also be a concern. This points to the main problem with advanced machine learning: while the math is well established and relatively accessible, access to the vast amounts of processing power and training data required to build useful software is limited to large organisations.</p>
<h2 id="january-13%2C-2018">January 13, 2018</h2>
<p>I got my hands on something called the <em>FERET Database</em>. This is a collection of images of faces that the U.S. military commissioned in the mid-nineties, containing about 11,000 images of roughly 800 individuals from different angles, wearing different clothes etc. It's what much of modern research into facial recognition algorithms has been based on. Here's the <a href="https://www.nist.gov/itl/iad/image-group/color-feret-database">relevant government website</a>.</p>
<p>The way you get this database is <em>emailing the U.S. Department of Defense</em>. Once you do, they give you login details to download the database. It comes in a weird 90s format, so I had to spend some time extracting and converting the images so I could look at them.</p>
<p><img src="https://maxkohler.com/assets/ml/feret-grid.jpg" alt="FERET Images" />
Example images from the FERET database</p>
<p>I'm not sure what I'm going to do with these images. I could use them to train a neural network, but they're also an interesting artifact in themselves. They're essentially a time capsule from the campus of <a href="https://www2.gmu.edu/">George Mason University</a> in the 1990s - 90s haircuts etc. I also like the idea that these are images only ever intended for machines to look at, and that while they're basically scientific documents created for a government agency, some of them are surprisingly artistic.</p>
<p><img src="https://maxkohler.com/assets/ml/evidence-1977.jpg" alt="Evidence" />
Evidence (1977) by Larry Sultan and Mike Mandel. <a href="http://larrysultan.com/archives/wp-content/uploads/2013/06/EV_PP32_SULTAN_MANDEL_1977.jpg">Image Source</a></p>
<p>It reminds me of <a href="http://larrysultan.com/gallery/evidence/">Evidence (1977)</a> by Larry Sultan and Mike Mandel, where they took NASA research photographs out of their original context and put them in a new order that tells a story.</p>
<h2 id="january-15%2C-2018">January 15, 2018</h2>
<p>Turns out <a href="http://www.paglen.com/">Trevor Paglen</a> did some <a href="https://qz.com/1103545/macarthur-genius-trevor-paglen-reveals-what-ai-sees-in-the-human-world/">work on the FERET images very recently</a>. The exhibition also includes machine-generated images and some original photography - all very successful. I'll try and get an exhibition catalogue.</p>
<p>Paglen has been doing this work for a while. Other projects of his include <a href="https://libsearch.arts.ac.uk/cgi-bin/koha/opac-detail.pl?biblionumber=235160&query_desc=">Invisible : covert operations and classified landscapes</a>, a book on restricted government sites. Also <a href="https://books.google.co.uk/books/about/Blank_Spots_on_the_Map.html?id=oM8u2198DcsC&printsec=frontcover&source=kp_read_button&redir_esc=y#v=onepage&q&f=false">Blank Spots on the Map</a>, which is about how governments manipulate maps to hide what they're doing.</p>
<h2 id="january-16%2C-2018">January 16, 2018</h2>
<h3 id="readings">Readings</h3>
<ul>
<li>Paglen, T. (2016), <em><a href="https://thenewinquiry.com/invisible-images-your-pictures-are-looking-at-you/">Invisible Images (Your Pictures Are Looking at You)</a></em></li>
<li>Flusser, V. (1984), <em><a href="https://archive.org/details/FlusserVilemTowardsAPhilosophyOfPhotography1984">Towards A Philosophy Of Photography</a></em></li>
<li>Starnes, S. (2017), <em><a href="https://brooklynrail.org/2017/10/artseen/TREVOR-PAGLEN-A-Study-of-Invisible-Things">TREVOR PAGLEN: A Study of Invisible Images</a></em></li>
<li>Turk, Pentland (1991), <em><a href="http://cvrr.ucsd.edu/ece172a/fa10/projects/papers/eigenfaces_cvpr.pdf">Face Recognition Using Eigenfaces</a></em></li>
<li>Turk, Pentland (1991), <em><a href="https://s3.amazonaws.com/academia.edu.documents/30894770/jcn.pdf?AWSAccessKeyId=AKIAIWOWYYGZ2Y53UL3A&Expires=1516287716&Signature=jG1StnTWLzBpYVDBek7S%2Fi5vKD4%3D&response-content-disposition=inline%3B%20filename%3DEigenfaces_for_Recognition.pdf">Eigenfaces for Recognition</a></em></li>
</ul>
<p>Tracey suggests I go see an exhibition called <a href="http://www.arts.ac.uk/csm/whats-on-at-csm/lethaby-gallery/metadata/">Metadata - How we relate to images</a> at CSM - I've scheduled it for Saturday.</p>
<p>I've spent some more time with the FERET database, going through the images, printing some of them and reading some of the related government reports:</p>
<ul>
<li>Phillips, Moon, Rizvi, Rauss (1999), <em><a href="http://ai2-s2-pdfs.s3.amazonaws.com/0f0f/cf041559703998abf310e56f8a2f90ee6f21.pdf">The FERET Evaluation Methodology for Face-Recognition Algorithms</a></em></li>
<li>Phillips, Rauss, Der (1996), <em>FERET (Face Recognition Technology) Recognition Algorithm Development and Test Results</em></li>
</ul>
<p>The 1996 paper points out:</p>
<blockquote>
<p>Some questions were raised about the age, racial, and sexual distribution of the database. However, at this stage of the program, the key issue was algorithm performance on a database of a large number of individuals.</p>
</blockquote>
<p>This might be an area worth exploring. The photos were collected by GMU, suggesting that most of the volunteers were probably students and university staff (not military employees, as is sometimes suggested). In some sense the whole history of institutional racism and sexism might be baked into this database?</p>
<p>Might be good to run some analytics on the gender / age / race distribution of the database.</p>
<p>I'm still interested in how exactly these photography sessions were conducted - how did they recruit volunteers, whose office was turned into a studio, what did people at the time say about the program etc.</p>
<h2 id="january-17%2C-2018">January 17, 2018</h2>
<h3 id="readings-1">Readings</h3>
<ul>
<li>Bridle, J. (2013), <em><a href="https://medium.com/matter/how-britain-exported-next-generation-surveillance-d15b5801b79e">How Britain Exported Next-Generation Surveillance</a></em></li>
<li>Taigman, Yang, Ranzato, Wolf (2014), <em><a href="https://research.fb.com/wp-content/uploads/2016/11/deepface-closing-the-gap-to-human-level-performance-in-face-verification.pdf?">DeepFace: Closing the Gap to Human-Level Performance in Face Verification</a></em></li>
<li>Pinheiro, Collobert, Dollar (2015), <em><a href="https://arxiv.org/pdf/1506.06204.pdf">Learning to Segment Object Candidates</a></em></li>
</ul>
<p>Segune suggests two additional readings on photographic archives (after seeing the FERET images):</p>
<ul>
<li>Enwezor, O. (2008), <em><a href="http://moodle.arts.ac.uk/pluginfile.php?forcedownload=1&file=%2F%2F520513%2Fblock_quickmail%2Fattachment_log%2F172019%2Fenwezor-archive.pdf">Archive Fever: Photography between History and the Monument</a></em></li>
<li>Foster, H. (2004), <em><a href="http://moodle.arts.ac.uk/pluginfile.php?forcedownload=1&file=%2F%2F520513%2Fblock_quickmail%2Fattachment_log%2F172019%2FHal%20Foster_archival%20impulse.pdf">An Archival Impulse</a></em></li>
</ul>
<p><img src="https://maxkohler.com/assets/ml/richter.jpg" alt="Installation view of 48 Portraits by Gerhard Richter" />
<a href="http://www.tate.org.uk/art/artworks/richter-48-portraits-ar00025">Tate Modern</a></p>
<p>She also points out <a href="http://www.tate.org.uk/art/artworks/richter-48-portraits-ar00025">48 Portraits (1971-98)</a> by Gerhard Richter.</p>
<h3 id="notes-on-%22invisible-images-(your-pictures-are-looking-at-you)%22">Notes on "Invisible Images (Your Pictures are Looking at You)"</h3>
<p>On a basic level, Paglen argues that existing models of visual culture are becoming less relevant because the vast majority of images are now created by machines for other machines. This has to do with the fact that a digital image is <em>primarily</em> machine-readable. You can only make it visible to human eyes for a brief moment using additional software, screens etc.</p>
<p>The second main point is that images are no longer primarily used as representations. Instead, machines use images to make predictions, activate mechanisms and generally <em>actively change</em> the real world. In his words:</p>
<blockquote>
<p>Images have begun to intervene in everyday life, their functions changing from representation and mediation, to activations, operations, and enforcement. Invisible images are actively watching us, poking and prodding, guiding our movements, inflicting pain and inducing pleasure. But all of this is hard to see.</p>
</blockquote>
<p>Paglen cites a number of examples of this that have been in operation for years, including cases where license plates are recognised and used to track people's movements, and retail companies that analyse customers' facial expressions.</p>
<p>He makes the point that places like Facebook are closely modelled on traditional notions of sharing images (using skeuomorphic terms like <em>albums</em>, <em>slideshows</em>) but this is only true on the surface. Underneath, your photos are feeding highly developed machine learning algorithms designed to extract value from your images (now or in the future). As Paglen points out, you could easily imagine the license plate recognition case being expanded to include images people share on social media.</p>
<p>He closes by saying that the long-term solution to this needs to be regulation - "hacks" that might be effective against recognition algorithms today will lose their effectiveness over time.</p>
<blockquote>
<p>We no longer look at images - images look at us. They no longer simply represent things, but actively intervene in everyday life. We must begin to understand these changes if we are to challenge the exceptional forms of power flowing through the invisible visual culture that we find ourselves enmeshed within.</p>
</blockquote>
<h2 id="january-20%2C-2018">January 20, 2018</h2>
<h3 id="notes-on-segune's-readings">Notes on Segune's Readings</h3>
<p>(She suggested these <a href="https://maxkohler.com/posts/2018-01-17-feret-database/#january-17-2018">a few days ago</a>)</p>
<h4 id="archive-fever%3A-photography-between-history-and-the-monument">Archive Fever: Photography between History and the Monument</h4>
<p>This cites an essay called <a href="https://www.jstor.org/stable/pdf/778312.pdf?refreqid=excelsior%3A53f6ebc3ba7c0f02e549d2dd321beee4">The Body and the Archive (1986)</a> by Allan Sekula, which talks about how photographic archives have been used as "an instrument of social control and differentiation underwritten by dubious scientific principles".</p>
<p><img src="https://maxkohler.com/assets/ml/bertillon.jpg" alt="Bertillon Archive" />
<a href="https://www.metmuseum.org/art/collection/search/289245?sortBy=Relevance&ft=alphonse+bertillon&offset=0&rpp=20&pos=1">The Metropolitan Museum of Art</a></p>
<p>Sekula talks about <a href="https://en.wikipedia.org/wiki/Alphonse_Bertillon">Alphonse Bertillon</a>, a French policeman who created a huge bullshit system to classify criminals based on photographs of their faces. <a href="https://www.metmuseum.org/art/collection#!?q=alphonse%20bertillon&perPage=20&sortBy=Relevance&sortOrder=asc&offset=0&pageSize=0">The Met</a> seems to have a good collection of his stuff. The Science Museum has some of the <a href="http://collection.sciencemuseum.org.uk/search?q=Bertillon">instruments he used to measure various facial features</a>.</p>
<p>There were similar archival projects to classify people along racial lines, like Francis Galton's (the Nazis were big fans).</p>
<blockquote>
<p>Their projects, Sekula writes, "constitute two methodological poles of the positivist attempts to define and regulate social deviance". The criminal (for Bertillon) and the racially inferior (for Galton) exist in the netherworld of the photographic archive, and when they do assume a prominent place in that archive, it is only to dissociate them, to insist on and illuminate their difference, their archival apartness from normal society.</p>
</blockquote>
<p>Enwezor goes on to describe a number of examples where archives are used as a way to conserve power, present existing systems of oppression as natural etc.</p>
<ul>
<li>The Bush administration collected a huge archive of Iraqi documents, phone conversations, emails ("Intelligence") in the hopes of finding proof of WMD. When they couldn't find any, they made up a document showing that Iraq bought yellow cake.</li>
<li>Colonial Britain was obsessed with collecting records of all sorts - they fill places like the British Museum, the NHM etc. This was a way for Britain to establish control over far-away countries.</li>
</ul>
<h4 id="an-archival-impulse">An Archival Impulse</h4>
<h2 id="january-24%2C-2018">January 24, 2018</h2>
<p><a href="http://www.arts.ac.uk/csm/whats-on-at-csm/lethaby-gallery/lethaby-gallery-past-exhibitions/metadata/">MetaData at the Lethaby Gallery</a></p>
<div markdown="1" class="gallery">
<p><img src="https://maxkohler.com/assets/ml/csm1.jpg" alt="CSM" />
<img src="https://maxkohler.com/assets/ml/csm2.jpg" alt="CSM" />
<img src="https://maxkohler.com/assets/ml/csm3.jpg" alt="CSM" />
<img src="https://maxkohler.com/assets/ml/csm4.jpg" alt="CSM" /></p>
</div>
<h2 id="january-25%2C-2018">January 25, 2018</h2>
<p>TODO Spoke to Segune about the FERET images</p>
<h2 id="january-26%2C-2018">January 26, 2018</h2>
<p>TODO Jak tutorial, discussed ways of presenting face images</p>
<h2 id="january-27%2C-2018">January 27, 2018</h2>
<p>TODO Decided to print the FERET images; looks like it's expensive. Need to talk to the technician; emailed Tracey</p>
<h2 id="january-29%2C-2018">January 29, 2018</h2>
<p>TODO Peer assessment</p>
<h2 id="febuary-14%2C-2018">February 14, 2018</h2>
<p><a href="https://en.wikipedia.org/wiki/Eigenface">Eigenfaces</a> are a way to represent images used in facial recognition software. First introduced by <a href="https://s3.amazonaws.com/academia.edu.documents/30894770/jcn.pdf?AWSAccessKeyId=AKIAIWOWYYGZ2Y53UL3A&Expires=1519322406&Signature=zUSN5N4wWx5J0GrqjQdFLMLJYto%3D&response-content-disposition=inline%3B%20filename%3DEigenfaces_for_Recognition.pdf">Turk and Pentland (1991)</a>. Below is figure 2 from that paper:</p>
<p><img src="https://maxkohler.com/assets/ml/eigen.png" alt="Eigenfaces" />
<a href="https://s3.amazonaws.com/academia.edu.documents/30894770/jcn.pdf?AWSAccessKeyId=AKIAIWOWYYGZ2Y53UL3A&Expires=1519322406&Signature=zUSN5N4wWx5J0GrqjQdFLMLJYto%3D&response-content-disposition=inline%3B%20filename%3DEigenfaces_for_Recognition.pdf">Turk, Pentland (1991)</a></p>
<p>There's something intriguing about the aesthetics of research papers.</p>
<p><img src="https://maxkohler.com/assets/ml/eigenface_reconstruction_opencv.png" alt="More Eigenfaces" />
<a href="https://docs.opencv.org/2.4/modules/contrib/doc/facerec/facerec_tutorial.html">OpenCV</a></p>
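<p>The core of the eigenface technique is compact enough to sketch. You flatten each image into a vector, subtract the mean face, and take the top principal components of the stack; here's a minimal NumPy version (the random 32×32 "faces" are a stand-in for real data such as FERET):</p>

```python
import numpy as np

def eigenfaces(images, k):
    """Compute the top-k eigenfaces of a stack of images.

    images: array of shape (n_images, height, width)
    Returns (mean_face, basis) where basis has shape (k, height, width).
    """
    n, h, w = images.shape
    X = images.reshape(n, h * w).astype(float)
    mean = X.mean(axis=0)
    X = X - mean  # centre the data on the mean face
    # Rows of Vt are the principal components - the "eigenfaces"
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return mean.reshape(h, w), Vt[:k].reshape(k, h, w)

# Toy demo on random "faces"; real use would load actual face images
rng = np.random.default_rng(0)
faces = rng.random((20, 32, 32))
mean_face, basis = eigenfaces(faces, k=5)
print(basis.shape)  # (5, 32, 32)
```

Each input face can then be approximated as the mean face plus a weighted sum of these eigenfaces, which is what makes them useful for recognition.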
<h2 id="febuary-18%2C-2018">February 18, 2018</h2>
<h3 id="another-face-database">Another Face Database</h3>
<p>The <a href="https://www.nist.gov/">National Institute of Standards and Technology</a> (which provides the <a href="https://maxkohler.com/posts/2018-01-17-feret-database/#january-13-2018">FERET Database</a>) also has something called the <a href="https://www.nist.gov/itl/iad/image-group/special-database-32-multiple-encounter-dataset-meds">Multiple Encounter Dataset (MEDS)</a>. This is a database containing 683 mugshots of deceased people, used to develop facial recognition software. This is starting to get much closer to <a href="https://maxkohler.com/posts/2018-01-17-feret-database/#january-20-2018">Bertillon</a>. I'm assuming using photographs of dead people allows them to get around some privacy concerns. They've also removed (in some cases blacked out) any reference to the person's name or reason for arrest. So what you're left with is this archive of black and white photographs of people from the 60s, 70s and 80s (judging by the haircuts).</p>
<p><img src="https://maxkohler.com/assets/ml/mugshots.png" alt="Mugshots" />
<a href="https://www.nist.gov/itl/iad/image-group/special-database-32-multiple-encounter-dataset-meds">National Institute of Standards and Technology</a></p>
<p>With the images comes a datafile describing the photographs:</p>
<table class="dense">
<thead>
<tr>
<td>Subject</td>
<td>Encounter</td>
<td>Record</td>
<td>DOB</td>
<td>WGT</td>
<td>SEX</td>
<td>HGT</td>
<td>RAC</td>
<td>HAI</td>
<td>EYE</td>
<td>PHD</td>
<td>IMT</td>
<td>POS</td>
<td>VLL</td>
<td>HLL</td>
</tr>
</thead>
<tbody>
<tr><td>...</td></tr>
</tbody>
</table>
<p>Interestingly this contains fields for the height (e.g. 5'11"), weight (in lbs.) and date of birth of the detainee.</p>
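<p>I don't know the exact layout of the NIST file, but assuming one comma-separated record per photograph with the fields listed above, loading it for the kind of analysis I have in mind would look something like this (the <code>load_metadata</code> helper and the separator are my assumptions, not NIST's documented format):</p>

```python
import csv
import io

# Field names as listed in the metadata table above
FIELDS = ["Subject", "Encounter", "Record", "DOB", "WGT", "SEX", "HGT",
          "RAC", "HAI", "EYE", "PHD", "IMT", "POS", "VLL", "HLL"]

def load_metadata(text):
    """Parse the metadata file into one dict per photograph.

    Assumes a comma-separated layout; the real file may differ.
    """
    reader = csv.DictReader(io.StringIO(text), fieldnames=FIELDS)
    return list(reader)
```

From there it's straightforward to tally the SEX, RAC and DOB columns to get the distribution statistics.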
<h2 id="febuary-27%2C-2018">February 27, 2018</h2>
<p><img src="https://maxkohler.com/assets/ml/eigenfaces.png" alt="Eigenfaces" /></p>
<p>Some more face databases. I'm thinking the reason these are all from the 90s is that research doesn't need this sort of standardised database anymore - people are now working with images collected from the internet. <a href="http://vis-www.cs.umass.edu/lfw/">Labeled Faces in the Wild</a> is an example. This has the benefit of being much cheaper than taking original photographs - you can create a database that is orders of magnitude larger for the same amount of money. Examples:</p>
<ul>
<li><a href="http://megaface.cs.washington.edu/">MegaFace</a>, containing one million unlabelled faces. Introduced by <a href="https://arxiv.org/pdf/1512.00596.pdf">Kemelmacher-Shlizerman et al., 2015</a></li>
<li><a href="http://www.cs.tau.ac.il/~wolf/ytfaces/">YouTube Face Database</a>, containing 3,500 videos of 1,500 people.</li>
<li><a href="http://vintage.winklerbros.net/facescrub.html">FaceScrub</a>, contains 100,000 images of 530 people. Paper by <a href="http://vintage.winklerbros.net/Publications/icip2014a.pdf">Ng, Winkler 2015</a></li>
</ul>
<p>Facebook research uses internal databases with millions of faces. Maybe there's something to this idea: back in the day, collecting a database had to be a dedicated effort. Now we're all contributing involuntarily to face recognition algorithms (and other machine learning applications) by way of our behaviour, movements and writing.</p>
<p><img src="https://maxkohler.com/assets/ml/att.gif" alt="AT&T Laboratory Database of Faces" />
<a href="http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html">AT&T Laboratories Cambridge</a></p>
<p class="hasImage" markdown="1">
<video playsinline="" muted="" loop="" controls="" autoplay="" src="https://maxkohler.com/assets/ml/rotation_101.mp4"></video>
<a href="http://www.ee.surrey.ac.uk/CVSSP/xm2vtsdb/">University of Surrey</a>
</p>
<h2 id="march-6%2C-2018%3A-rnns">March 6, 2018: RNNs</h2>
<p>This might be a fun project to get into generating things with neural networks: <a href="https://developer.nytimes.com/">The New York Times has an API</a> that makes it really easy to get their content programmatically. I pulled every article headline from January 2016 to the present - about 4MB of text. <a href="https://github.com/sherjilozair/char-rnn-tensorflow">This TensorFlow setup</a> makes it trivial to train a character-based RNN on the data, and eventually generate new headlines that (somewhat) match the language of the New York Times. It's pretty amazing to see the network learn English from scratch in a few hours of training.</p>
<pre><code>The Dutch Polders by Bike and Schooner The Royals Take the Title ‘The Affair’ Season 2 Episode 5: Never Read the Book ‘The Walking Dead’ Season 6, Episode 4 Recap: The Making of Morgan &#8216;Homeland&#8217; Recap, Season 5, Episode 5: Can Carrie Figure Out What&#8217;s Going On With Allison? Long Lines for Story Time The Best Moments in College Football This Week Dangers for the Unwary Q. and A.: Chan Koonchung on Imagining a Non-Communist China Report on Bella Vista Health Center Inside the Trial of Sheldon Silver Jeb Bush Says He Was Unaware of Rubio PowerPoint Deck
</code></pre>
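<p>For reference, pulling the headlines down is only a few lines. This is a sketch against the NYT Archive API - the URL shape, the <code>api-key</code> parameter and the response layout (<code>response.docs[].headline.main</code>) are my reading of the public docs, so double-check them against the current documentation:</p>

```python
import json
from urllib.request import urlopen

ARCHIVE_URL = "https://api.nytimes.com/svc/archive/v1/{year}/{month}.json?api-key={key}"

def headlines_from_archive(data):
    """Pull the main headline out of each article in a parsed Archive API response."""
    return [doc["headline"]["main"] for doc in data["response"]["docs"]]

def fetch_month(year, month, api_key):
    """Download one month of article metadata and return its headlines."""
    with urlopen(ARCHIVE_URL.format(year=year, month=month, key=api_key)) as resp:
        return headlines_from_archive(json.load(resp))
```

Looping <code>fetch_month</code> over every month since January 2016 and writing the headlines to a text file gives you the training corpus.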
<p>This sort of automated writing is already <a href="https://www.wired.com/2017/02/robots-wrote-this-story/">widely used at mainstream outlets</a>. The <a href="https://www.washingtonpost.com/pr/wp/2017/09/01/the-washington-post-leverages-heliograf-to-cover-high-school-football/">Washington Post</a> seems to be leading the pack.</p>
<h2 id="april-16-tutorial-notes">April 16 Tutorial Notes</h2>
<h3 id="newspaper-clippings">Newspaper Clippings</h3>
<p><img src="https://maxkohler.com/assets/ml/clippings.jpg" alt="Fake Letterpress Newspaper Clippings" /></p>
<ul>
<li>Letterpress newspaper clippings are successful; maybe make a website to provide context. This could explain the dataset that was used, link to the relevant papers etc.</li>
<li>This website could also have a live version of the trained model, let people create and print their own snippets (Would be good way to learn how to deploy a model). This could also be used in the show - have it generate and print loads of clippings in real time (but not on receipt printer because everyone's doing that).</li>
<li>Figure out what it is the project talks about - Automation in journalism? Exploring one archive through generating another archive?</li>
</ul>
<h3 id="large-scale-drawing-machine">Large Scale Drawing Machine</h3>
<p>Continues to be a health and safety nightmare.</p>
<h3 id="machine-learning-dataset-book">Machine Learning Dataset Book</h3>
<p><img src="https://maxkohler.com/assets/ml/book-1.png" alt="ML Book Spread" />
<img src="https://maxkohler.com/assets/ml/book-2.png" alt="ML Book Spread" /></p>
<ul>
<li>I've narrowed the focus of this - rather than having three parts (Models, Data and Processing Power), it will just focus on the data that is used to train models.</li>
<li>The format I went for initially is way too big (should've printed some spreads earlier).</li>
<li>Tracey suggests that book editing is a bit like film editing - start with a rough cut that just gets all the content in order, then go back and refine the typography etc.</li>
<li>The book can have different layers of information that the reader can go through in different ways (I'm thinking the essay, the data visualisations, the descriptions of each dataset, and a separate essay on the principles of machine learning would be layers like this). You signal these layers to the reader by using different paper stocks, different paper sizes etc.</li>
<li>Definitely coated for the images and uncoated for the text sections.</li>
</ul>
Documenting web design work2018-07-08T10:00:00Zhttps://maxkohler.com/posts/2018-07-08-showing-web-design/<h2 id="let's-look-at-some-options">Let's look at some options</h2>
<p>I'll list them roughly in order of elaborate-ness.</p>
<h3 id="linking-to-the-live-site">Linking to the live site</h3>
<p>This requires the least amount of effort on your side, and arguably it gives people the most accurate impression of your work. They can click around the site, look at animations, and see your design work at different screen sizes by resizing their browser. They can even look at things like loading times and accessibility if they're so inclined.</p>
<p>The biggest potential downside is that you can only really link to one page — typically the homepage. If you spent weeks designing a beautiful article template, people looking through your portfolio might never see it. This becomes even more of an issue when you're working on an app. In that case most of your work probably lives behind some login system, which makes it difficult for people looking at your portfolio to access.</p>
<p>In any case, you should probably throw in a link to the live site even if you're using screenshots or some other presentation method. Should you set the link to open in a new tab? <a href="https://css-tricks.com/use-target_blank/">Probably not</a>.</p>
<h3 id="screenshots">Screenshots</h3>
<p>In my experience, screenshots are probably the most popular way to show web design work. They're easy to produce, you can have more than one to show different parts of a site, and they allow you to show stuff that's behind login systems.</p>
<p>There's all kinds of ways to do them:</p>
<h3 id="the-window-less-screenshot">The Window-less Screenshot</h3>
<p><img src="https://maxkohler.com/assets/showing-design/bond%20header.png" alt="Window-less screenshot" />
<a href="http://bond.backerkit.com/">Bond Conference 2018 website</a> by Andy McMillan et al. Via <a href="https://fontsinuse.com/uses/19968/bond-conference-2018-website">Fonts in Use</a></p>
<p>I'd say this works best when the site has a coloured background that makes it stand out on your portfolio, like the one above. Then it's a nice, clean option with nothing there to distract from your work.</p>
<p>These also have the advantage of being pretty easy to produce. <a href="https://developers.google.com/web/updates/2017/04/devtools-release-notes#screenshots">Chrome</a> and <a href="https://support.mozilla.org/en-US/kb/firefox-screenshots">Firefox</a> both have built-in tools to take regular and full-page screenshots.</p>
<h3 id="the-minimal-windowed-screenshot">The Minimal Windowed Screenshot</h3>
<p><img src="https://maxkohler.com/assets/showing-design/Katya-de-Grunwald-2.png" alt="Windowed Screenshot" />
<a href="https://www.katyadegrunwald.com/">Katya de Grunwald</a> by <a href="https://www.studiothomas.co.uk/projects/katya-de-grunwald">Studio Thomas</a>. Via <a href="https://fontsinuse.com/uses/22031/katya-de-grunwald">Fonts in use</a></p>
<p>I can think of two ways to make these.</p>
<ol>
<li>Do the whole thing in Illustrator: Draw up a browser window, import a screenshot of the site and export the two together as a PNG.</li>
<li>Do it programmatically. Draw the browser window in Illustrator as before, but this time export it as an SVG. Then write some front-end code that inserts all your screenshots into that browser window automatically. You could get pretty clever and have it respond to different-sized screenshots automatically. You could even automate the screenshot-taking itself through some kind of <a href="https://github.com/GoogleChrome/puppeteer">headless browser</a>, but that's probably taking things too far.</li>
</ol>
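<p>As a rough illustration of the second approach (swapping the SVG/front-end idea for Python, which is an assumption on my part), here's a sketch using Pillow that draws a minimal toolbar and pastes a screenshot underneath it. The filenames are placeholders:</p>

```python
from PIL import Image, ImageDraw

TOOLBAR_H = 36  # height of the drawn browser chrome, in pixels

def frame_screenshot(screenshot_path, out_path, bg="#f4f4f4"):
    """Paste a screenshot underneath a minimal, hand-drawn browser toolbar."""
    shot = Image.open(screenshot_path).convert("RGB")
    w, h = shot.size
    framed = Image.new("RGB", (w, h + TOOLBAR_H), bg)
    draw = ImageDraw.Draw(framed)
    # Three macOS-style "traffic light" dots on the toolbar
    for i, colour in enumerate(("#ff5f57", "#febc2e", "#28c840")):
        x = 14 + i * 20
        draw.ellipse((x, 13, x + 10, 23), fill=colour)
    framed.paste(shot, (0, TOOLBAR_H))
    framed.save(out_path)

# Demo with a generated stand-in image, so the sketch runs end to end
Image.new("RGB", (320, 200), "#3355ff").save("screenshot.png")
frame_screenshot("screenshot.png", "framed.png")
```

<p>From there it's one small loop to run a whole folder of screenshots through the same frame.</p>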
<h3 id="the-windowed-screenshot">The Windowed Screenshot</h3>
<p><img src="https://maxkohler.com/assets/showing-design/hattie-newman-home-alt1.png" alt="Windowed Screenshot" />
<a href="http://hattienewman.co.uk/">hattienewman.co.uk</a> by <a href="http://art-dept.com/">Art Department</a>/Hattie Newman. Via <a href="https://fontsinuse.com/uses/14187/hattie-newman-website">Fonts in Use</a></p>
<p>I like these because they give people a sense of <em>scale</em> of your work. With the browser interface acting as a reference point, it's easier to judge how big things are in proportion <sup class="footnote-ref"><a href="https://maxkohler.com/posts/2018-07-08-showing-web-design/#fn1" id="fnref1">1</a></sup>.</p>
<p>You definitely want to set your desktop background to a nice colour and make your browser window look as clean as possible. Hide the bookmarks bar, close all other tabs, and hide all plugin icons. It might even be worth using a different browser if it has a nicer-looking interface. Chrome and Safari seem to be the most popular, though I do see Edge every once in a while.</p>
<p>Apparently the drop shadow in the example above is a <a href="https://www.macgasm.net/2011/05/23/disable-dropshadow-mac-os-window-screenshots/">native feature</a> on Mac. I can't really think of a Windows equivalent other than setting your desktop background to white and manually taking a screenshot that includes the drop shadow.</p>
<h3 id="the-screenshot-on-device">The Screenshot-on-device</h3>
<p><img src="https://maxkohler.com/assets/showing-design/phone.jpeg" alt="mockup" />
Betamatters by <a href="http://candychiu.com/">Chiu Candy</a>. Via <a href="https://fontsinuse.com/uses/22148/betamatters">Fonts in Use</a></p>
<p>If done tastefully, these can be very effective. The device establishes the scale of your work, just like the browser window in the previous example.</p>
<p>A popular variation of this is what I call <em>the triad</em>:</p>
<p><img src="https://maxkohler.com/assets/showing-design/triad.jpeg" alt="mockup" />
Sight Unseen OFFSITE by <a href="http://www.kokoromoi.com/">Kokoro & Moi</a>. Via <a href="https://fontsinuse.com/uses/8056/sight-unseen-offsite">Fonts in Use</a></p>
<p>I remember these being extremely popular when responsive design was new. Showing the site on a desktop, tablet and phone was a way to show off that you were part of the movement. Responsive design is pretty much the default now, so I'm not sure it's that much of a selling point anymore.</p>
<p>The biggest issue with the screenshot-on-device might be how quickly devices become outdated. If your portfolio is full of iPhone 4 mockups, that's not the best look. Do you go through every couple of years and redo all your screenshots on modern devices? Is this worth automating?</p>
<p>As far as device images go, Facebook seems to have by far the <a href="https://facebook.design/devices">nicest ones</a>. I like the idea of looking beyond the typical Apple devices - maybe your work looks better on a Microsoft Surface? <sup class="footnote-ref"><a href="https://maxkohler.com/posts/2018-07-08-showing-web-design/#fn2" id="fnref2">2</a></sup></p>
<h3 id="the-screenshot-on-device-in-hand-in-coffeeshop">The screenshot-on-device-in-hand-in-coffeeshop</h3>
<p><img src="https://maxkohler.com/assets/showing-design/facebook.jpeg" alt="facebook photograph" />
Via <a href="https://medium.com/facebook-design/evolving-the-facebook-news-feed-to-serve-you-better-f844a5cb903d">Facebook Design</a></p>
<p>Facebook seems to do this a lot. I think it works for them, maybe because we're already so familiar with their interface we don't need to see it in detail? Most of the time though, these feel kind of cheesy to me. Who holds their phone up in front of their face like that?</p>
<p>I guess it all depends on the audience you're trying to address. If these full-on mockups seem right for you, there's all kinds of products that make it easy to generate them. <a href="https://mockuuups.studio/">This looks like one of the nicer ones</a>.</p>
<h2 id="video">Video</h2>
<p class="hasImage">
<video playsinline="" muted="" loop="" controls="" autoplay="" src="https://maxkohler.com/assets/showing-design/video.mp4"></video>
<a href="https://aku.co/">AKU Collective</a>. Via <a href="https://hoverstat.es/features/aku-collective">Hover States</a>
</p>
<p>Videos are great. They can show off multi-screen flows, animations and fancy interactions all at once in a digestible way.</p>
<p>They're also surprisingly difficult to produce. First, you need the right recording software — I like the built-in game recorder on Windows (press Win-G and you're good to go). You probably also want to trim the recording and export it to the <a href="https://developer.mozilla.org/en-US/docs/Web/HTML/Supported_media_formats">right format</a>, so you need software for that.</p>
<p>Once you've worked out your software situation, you need to do the actual recording. The last time I did this, it was much harder than I thought. You want to move your mouse around calmly, scroll up and down slowly and pause at the right moments to give people a chance to take in the work — pretty much the opposite of how I'd normally interact with a website.</p>
<p>I found it helpful to write down a little script to remind me what I was doing: <em>Click on the about page, scroll down, interact with the slider, open the menu</em> etc. Even so it took me multiple attempts to get a decent result.</p>
<h2 id="embedding-the-site-as-an-iframe">Embedding the site as an iFrame</h2>
<p>This seems to have a lot of upsides: You don't have to mess with screenshots or recording software. It's interactive, and animations play in real time. You could even make the iFrame resizable to show off how your design works at different screen sizes.</p>
<p>On the downside: Since you're loading a whole website inside that iFrame, performance could be an issue. There's also the risk that the client makes some drastic change to their site. What if they bring on another designer and they start to make changes? It would be weird to have their work pop up on your portfolio.</p>
<p><a href="https://aku.co/projects/interior-architecture-symposium-sisu/">AKU Collective</a> are one of the few people I found doing this out in the wild.</p>
<h2 id="conclusion">Conclusion</h2>
<p>There's all kinds of ways to show web design work. Screenshots are probably a good default choice. I tend to gravitate towards the more minimal mockups, but your work might need a nice coffeeshop around it. We can probably stop doing <em>the triad</em> at this point. If you're doing a lot of animation work, videos or iFrames might be a good option to show that off.</p>
<p>In any case, make sure you link to the dang thing.</p>
<section class="footnotes">
<ol class="footnotes-list">
<li id="fn1" class="footnote-item"><p>in this vein, I also like the Dropbox idea of <a href="https://medium.com/dropbox-design/desktop-prototyping-a6004fb5598a">designing interfaces in a simulated desktop environment</a>. They're using this primarily as an internal design tool, but maybe it could be a way of presenting finished work, too? <a href="https://maxkohler.com/posts/2018-07-08-showing-web-design/#fnref1" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn2" class="footnote-item"><p>Facebook also has good photos of <a href="https://facebook.design/handskit.html#filters">phones in hands</a>. <a href="https://maxkohler.com/posts/2018-07-08-showing-web-design/#fnref2" class="footnote-backref">↩︎</a></p>
</li>
</ol>
</section>
Practical CSS Scroll Snapping2018-08-15T10:00:00Zhttps://maxkohler.com/posts/2018-08-15-scroll-snapping/<p>I wrote about <a href="https://maxkohler.com/posts/2018-08-15-scroll-snapping/">CSS Scroll Snapping</a> on <a href="https://css-tricks.com/">CSS-Tricks</a>. This is the first time I've written for a professional publication, which is exciting. It's also the first article of mine that went through a proper editorial process, and it's much better because of it. Processes are the best.</p>
<p><a href="https://css-tricks.com/practical-css-scroll-snapping/">Read the article here</a>.</p>
Neural Networks2018-09-23T10:00:00Zhttps://maxkohler.com/posts/2018-09-23-neural-networks/<p>I did experiment with some pre-built networks, such as <a href="https://github.com/carpedm20/DCGAN-tensorflow">DCGAN</a> and <a href="https://github.com/sherjilozair/char-rnn-tensorflow">char-rnn</a>, but I didn't really understand what was happening under the surface. More importantly, I couldn't modify these networks to change their outputs (apart from tweaking some hyperparameters).</p>
<p>The focus of the next few months is going to be to change that. As with <a href="https://maxkohler.com/2017/teaching-machines-to-draw/">earlier projects</a>, I will be taking notes here as I go along.</p>
<h2 id="ideas-for-a-workshop">Ideas for a workshop</h2>
<p>Sheena suggested I might run a workshop with <a href="http://interpolate.org.uk/">Interpolate</a> on the subject of machine learning. This comes after a discussion at <a href="http://awesomephant.github.io/notes/2018/10/19/feminist-alexa.html">Designing a Feminist Alexa</a>. We agreed that there seems to be an awful lot of magical thinking among humanities-folk about what machine learning is. To me, it often feels like these debates are floating in thin air. Arguments seem to be based more on Black Mirror episodes and Ted Talks than the papers where the working mechanisms of neural networks are being developed.</p>
<p>The goal of the workshop would be to dispel some of those myths, and give people some low-level understanding of what they're talking about. I'm proposing to do this by having a group of people train a neural network by hand — using nothing but pen, paper, and maybe a basic calculator. Over the course of a few hours, we'd build up a network using string and index cards pinned to the wall. Once the network is sufficiently trained, we'd use it to generate some kind of outcome.</p>
<p>I'm imagining it like the scene in <a href="https://images-assets.nasa.gov/image/s70-34986/s70-34986~orig.jpg">Apollo 13</a> where engineers in mission control are working out orbital mechanics using <a href="https://en.wikipedia.org/wiki/Slide_rule">slide rules</a>.</p>
<p>The main question is: <em>What kind of task do we train this network on?</em> It has to be simple enough to be accomplished by a small network with limited computing power, yet complex enough to keep people interested.</p>
<ul>
<li><strong>MNIST Digit recognition</strong> would be possible from a technical standpoint. Classification might be a bit dry, but maybe an opportunity to talk about social/political issues.</li>
<li><strong>Image generation</strong> would be great for outcomes, but hard to achieve given the resources. It could work if the images were extremely low resolution? But then the results might be hard to distinguish from random pixels.</li>
<li><strong>Text generation</strong>. A recurrent net would probably be hard to convey. Outcomes could be useful though.</li>
</ul>
<h2 id="november-22%2C-2018%3A-some-more-specific-workshop-ideas">November 22, 2018: Some more specific workshop ideas</h2>
<ul>
<li>The dataset will be a combination of images from the FERET database (scaled down to 12 × 18 = 216 pixels) and random images from CIFAR-10 cropped/scaled to the same size.</li>
<li>We'll be training a linear classifier that distinguishes between two classes, <code>face</code> and <code>no face</code>, using SVM loss and vanilla gradient descent. (I think it's easy enough to make the connection from that to a full neural network.)</li>
</ul>
<p>The learning algorithm is taken straight out of Rosenblatt (1958):</p>
<p>$$w_{i,j}^{\text{next step}} = w_{i,j} + \eta (y_j - \hat{y}_j)x_i$$</p>
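<p>To sanity-check the rule before running it by hand, here's the same update in a few lines of Python. This is my own sketch rather than workshop material: a single output unit on toy four-pixel "images" instead of the real 216-pixel ones, with all numbers made up for illustration.</p>

```python
import random

ETA = 0.1  # learning rate η

def predict(w, x):
    """Threshold the weighted sum: 1 = face, 0 = no face."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0

def train_step(w, x, y):
    """One Rosenblatt update per weight: w_i += η (y − ŷ) x_i."""
    y_hat = predict(w, x)
    return [wi + ETA * (y - y_hat) * xi for wi, xi in zip(w, x)]

# Toy data: four-pixel "images" instead of the 12 × 18 = 216 pixel ones
random.seed(0)
w = [round(random.uniform(-0.5, 0.5), 1) for _ in range(4)]  # one-decimal init
samples = [([1.0, 0.9, 0.1, 0.0], 1),   # "face"
           ([0.0, 0.2, 0.8, 1.0], 0)]   # "no face"

for _ in range(20):  # a handful of passes is plenty for separable toy data
    for x, y in samples:
        w = train_step(w, x, y)
```

<p>Each pass over the data is exactly what the workshop does on paper: multiply, compare, nudge each weight by a one-decimal amount.</p>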
<p>Once the model is trained, it can be evaluated:</p>
<ul>
<li>Accuracy/Recall</li>
<li>Can we draw images that will trick the model?</li>
<li>What can we say about the data that we fed into the model? Where does it come from?</li>
<li>What about the idea that we created two distinct categories?</li>
<li>What about the idea of processing power (how centralised is it)?</li>
<li>Can we visualise the weight matrix (turn it back into a two-dimensional image)?</li>
</ul>
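<p>The first and last of these points are easy to prototype in code. A sketch (the weight vector and test set here are placeholders, not real workshop data; in the workshop they'd come off the index cards on the wall):</p>

```python
def predict(w, x):
    """1 = face, 0 = no face."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0

def accuracy(w, data):
    """Fraction of labelled examples the classifier gets right."""
    return sum(predict(w, x) == y for x, y in data) / len(data)

def weights_as_image(w, width=12, height=18):
    """Reshape the flat 216-entry weight vector back into rows of pixels."""
    return [w[r * width:(r + 1) * width] for r in range(height)]

w = [round(0.1 * ((i % 7) - 3), 1) for i in range(12 * 18)]  # placeholder weights
data = [([1.0] * 216, 1), ([0.0] * 216, 0)]                   # placeholder test set

print(accuracy(w, data))
print(len(weights_as_image(w)), len(weights_as_image(w)[0]))  # 18 rows of 12
```

<p>Shading each cell of that 12 × 18 grid by its weight value is one way to "see" what the model has learned.</p>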
<h2 id="monday%2C-december-17">Monday, December 17</h2>
<p>Notes from running the workshop myself:</p>
<ul>
<li>Cutting up the test image into strips leads to a good visual artefact</li>
<li>It might be better to present the images in black and white to avoid confusion when converting them to decimals.</li>
<li>Not sure if inputs need to be normalised</li>
<li>Random numbers are necessary to initialise the weight matrix; maybe bring a printed list, not unlike the RAND Corporation's book of 1,000,000 random numbers.</li>
<li>What happens if the initialisation happens to give correct predictions? It might be better to cherry-pick initial values for $$W$$ so a smooth training process can happen.</li>
<li>Probably best to print all the graph paper in advance</li>
<li>Also design it such that the pixels and the columns are lined up, to make it easier to find which numbers to multiply. This will be nice as it results in one big drawing / document at the end.</li>
<li>All numbers in $$W$$ and $$X$$ need to have one decimal only, so they can be multiplied without using a calculator.</li>
</ul>
<h2 id="things-that-my-go-in-the-reader">Things that might go in the reader</h2>
<ul>
<li>Image classification models are easily tricked</li>
<li>A chapter from <em>Classification and its Consequences</em></li>
<li>Rosenblatt (1958)</li>
<li>Something on the biological neuron</li>
<li>Paglen: Your images are looking at you</li>
<li>A few examples of contemporary stories about facial recognition going wrong</li>
<li>FERET collection method</li>
<li>Hito Steyerl: Pattern (Mis-)Recognition</li>
<li>Facebook paper on face recognition</li>
<li>Maybe the paper about the PhD student doing the ImageNet challenge by hand</li>
<li>Something about mechanical turk - human labour of cleaning datasets is outsourced into poor countries (perhaps collect examples from papers)</li>
</ul>
Visual Forensics2018-10-18T10:00:00Zhttps://maxkohler.com/posts/2018-10-18-visual-forensics/<h2 id="reading">Reading</h2>
<ul class="contains-task-list">
<li class="task-list-item"><input class="task-list-item-checkbox" checked="" disabled="" type="checkbox" /> David Batchelor: <em><a href="https://approachestopainting.files.wordpress.com/2013/01/163577202-chromophobia-david-batchelor.pdf">Chromophobia</a></em></li>
<li class="task-list-item"><input class="task-list-item-checkbox" checked="" disabled="" type="checkbox" /> William Gass: <em><a href="https://www.scribd.com/document/254987042/On-Being-Blue">On Being Blue</a></em></li>
<li class="task-list-item"><input class="task-list-item-checkbox" checked="" disabled="" type="checkbox" /> Tim Ingold: <em><a href="https://taskscape.files.wordpress.com/2011/03/lines-a-brief-history.pdf">Lines: A Brief History</a></em></li>
<li class="task-list-item"><input class="task-list-item-checkbox" checked="" disabled="" type="checkbox" /> Petra Lange-Berndt: <em>Materiality</em></li>
<li class="task-list-item"><input class="task-list-item-checkbox" disabled="" type="checkbox" /> Catherine Malabou: <em>Ontology of the Accident: An Essay on Destructive Plasticity</em></li>
<li class="task-list-item"><input class="task-list-item-checkbox" checked="" disabled="" type="checkbox" /> Maria Fusco: <em><a href="https://vimeo.com/142818895">Master Rock</a></em></li>
<li class="task-list-item"><input class="task-list-item-checkbox" checked="" disabled="" type="checkbox" /> Charles and Ray Eames (1977): <em>Powers of Ten</em></li>
<li class="task-list-item"><input class="task-list-item-checkbox" checked="" disabled="" type="checkbox" /> <em><a href="https://www.youtube.com/watch?v=kAN6zJKlAHI">The Art of Japanese Life: Nature</a></em></li>
<li class="task-list-item"><input class="task-list-item-checkbox" checked="" disabled="" type="checkbox" /> William Kentridge: <em>Six Drawing Lessons</em></li>
<li class="task-list-item"><input class="task-list-item-checkbox" checked="" disabled="" type="checkbox" /> Hito Steyerl: <em><a href="https://www.e-flux.com/journal/10/61362/in-defense-of-the-poor-image/">In Defense of the Poor Image</a></em></li>
<li class="task-list-item"><input class="task-list-item-checkbox" checked="" disabled="" type="checkbox" /> <em>Strange Days: Memories of the Future</em> at 180, The Strand</li>
<li class="task-list-item"><input class="task-list-item-checkbox" checked="" disabled="" type="checkbox" /> <em>Surreal Science</em> at Whitechapel Gallery</li>
<li class="task-list-item"><input class="task-list-item-checkbox" checked="" disabled="" type="checkbox" /> Christian Marclay: <em>The Clock</em> (Tate Modern)</li>
<li class="task-list-item"><input class="task-list-item-checkbox" checked="" disabled="" type="checkbox" /> Lindsay Seers: <em><a href="https://www.tate.org.uk/art/artists/lindsay-seers-12601/lindsay-seers-i-turned-myself-camera">I am a Viewfinder</a></em></li>
</ul>
<p><a href="https://photos.app.goo.gl/Wb7qeVUeRGfywSTw9">Collected images</a></p>
<h2 id="october-12%2C-2018">October 12, 2018</h2>
<h3 id="reading-1">Reading</h3>
<ul class="contains-task-list">
<li class="task-list-item"><input class="task-list-item-checkbox" checked="" disabled="" type="checkbox" /> Lindsay Seers: <em><a href="https://www.tate.org.uk/art/artists/lindsay-seers-12601/lindsay-seers-i-turned-myself-camera">I am a Viewfinder</a></em></li>
<li class="task-list-item"><input class="task-list-item-checkbox" checked="" disabled="" type="checkbox" /> Charles and Ray Eames (1977): <em><a href="https://www.youtube.com/watch?v=0fKBhvDjuy0">Powers of Ten</a></em></li>
</ul>
<p>The first task is to document <code>51°30’37.6”N 0°06’56.3”W</code>, the true geographical centre of London. Using a USB Microscope, a digital camera and graph paper, I document a 1' × 1' square on the embankment.</p>
<p><img src="https://maxkohler.com/assets/vf/site-overview-colour.jpg" alt="Microscope 1" /></p>
<p><img src="https://maxkohler.com/assets/vf/micro-1.jpg" alt="Microscope 1" /></p>
<p><img src="https://maxkohler.com/assets/vf/walk-0.png" alt="Walk 1" /></p>
<h2 id="october-16%2C-2018">October 16, 2018</h2>
<p>Encouraged by Friday's results, I go back to the embankment to take more microscope photographs. If I take many images of the same area, I can stitch them together in Photoshop and generate a much higher-resolution image. After a few hours, I have taken 2,764 images.</p>
<p>After about 20 hours of arranging these images (testing the limits of both Photoshop and my computer), I end up with a number of collages like these:</p>
<p><img src="https://maxkohler.com/assets/vf/walk-1.jpg" alt="Walk 1" />
<img src="https://maxkohler.com/assets/vf/walk-2.png" alt="Walk 1" />
<img src="https://maxkohler.com/assets/vf/walk-3.png" alt="Walk 1" /></p>
<p>When I took these images, I was doing my best to move the camera in straight lines across an area. The collages show how difficult this is to do - the lines meander from left to right, the images are rotated by varying amounts, covering some parts of the surface repeatedly while leaving others blank.</p>
<p>I'm also finding some other artifacts related to lichen. There's a database of every recorded lichen in London, with all kinds of rich metadata attached:</p>
<div class="table-container">
<table class="dense">
<thead>
<tr>
<td>gbifID</td>
<td>Dataset Key</td>
<td>Occurrence ID</td>
<td>Scientific Name</td>
<td>Country</td>
<td>Locality</td>
<td>Latitude</td>
<td>Longitude</td>
<td>Identified By</td>
</tr>
</thead>
<tbody>
<tr><td colspan="9">…</td></tr>
</tbody>
</table>
</div>
<p>Source: GBIF.org (15 October 2018): <em>GBIF Occurrence Download</em> <a href="https://doi.org/10.15468/dl.f1lmko">DOI: 10.15468/dl.f1lmko</a></p>
<p>The section of the database (limited to London) that I downloaded has 2,548 entries. To save space, the rows and various columns are omitted in the example above.</p>
<p>Also, Matt points out that the lichen in my images are forming <em>Turing Patterns</em>: a mathematical concept in which patterns found in nature (such as stripes, spots and growth patterns) arise from <em>reaction-diffusion</em>, a model of how chemicals spread and react with one another.</p>
<p><img src="https://maxkohler.com/assets/vf/TuringPattern-2.png" alt="Turing Patterns" />
Milos Dolnik, Anatol M. Zhabotinsky, and Irving Epstein (2001): <em>Resonant suppression of Turing patterns by periodic illumination</em>. Physical Review E 63, 026101. <a href="https://www.researchgate.net/publication/12025147_Resonant_suppression_of_Turing_patterns_by_periodic_illumination">10.1103/PhysRevE.63.026101</a>.</p>
<p><img src="https://maxkohler.com/assets/vf/turing2.png" alt="Turing Patterns" />
Gray-Scott Reaction-Diffusion. <a href="https://github.com/pmneila/jsexp">Github</a></p>
<p>See also: Alan Turing (1952): <em><a href="http://www.dna.caltech.edu/courses/cs191/paperscs191/turing.pdf">The Chemical Basis of Morphogenesis</a></em></p>
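<p>For reference, the Gray-Scott variant of reaction-diffusion shown above fits in a few lines of NumPy. This is a minimal sketch: the parameters are the commonly used spot-forming values, not taken from the papers cited here.</p>

```python
import numpy as np

def laplacian(z):
    """Five-point Laplacian with wrap-around (periodic) edges."""
    return (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
            np.roll(z, 1, 1) + np.roll(z, -1, 1) - 4 * z)

def gray_scott_step(u, v, Du=0.16, Dv=0.08, f=0.035, k=0.065):
    """One explicit Euler step of the Gray-Scott reaction-diffusion model."""
    uvv = u * v * v                       # the reaction term U + 2V → 3V
    u = u + Du * laplacian(u) - uvv + f * (1 - u)
    v = v + Dv * laplacian(v) + uvv - (f + k) * v
    return u, v

# Seed: chemical U everywhere, a small square of V in the middle
n = 64
u = np.ones((n, n))
v = np.zeros((n, n))
v[28:36, 28:36] = 0.5
for _ in range(500):
    u, v = gray_scott_step(u, v)
# Rendering v as a greyscale image shows the spot/stripe structures forming
```

<p>The jsexp demo linked above runs essentially this update on the GPU, which is why it can do it in real time.</p>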
<h2 id="october-19%2C-2018">October 19, 2018</h2>
<h3 id="catrin-morgan-on-visual-essays">Catrin Morgan on Visual Essays</h3>
<p>A definition of illustration: any image that takes a communicating role in a text ("text" meaning words, images, or both). Catrin did a lot of work on different depictions of <a href="https://maxkohler.com/posts/2018-10-18-visual-forensics/">St. Jerome in his Study</a>, which is <a href="https://wsworkshop.org/2018/03/catrin-morgan/">detailed here</a>.</p>
<p><img src="https://maxkohler.com/assets/vf/jerome.jpg" alt="St. Jerome in his study" />
Vincenzo Catena (ca. 1510): <em>St. Jerome in his Study</em>. <a href="https://www.nationalgallery.org.uk/paintings/vincenzo-catena-saint-jerome-in-his-study">The National Gallery</a></p>
<h3 id="constructing-an-argument-with-images">Constructing an argument with images</h3>
<p>James Elkins writes about this (at length) in <a href="http://writingwithimages.com/">Writing with images (2013)</a></p>
<p>Illustrations in art-history writing tend to serve one of two roles. One is mnemonic: an image to remind us what a painting looks like, which doesn't have to be a very good reproduction. The other is evidence: <em>here's proof that this painting really exists</em>. Neither is very exciting.</p>
<h3 id="images-arranged-on-a-timeline%3A-a-visual-argument">Images arranged on a timeline: a visual argument</h3>
<p>You start seeing images referencing each other (chronology)</p>
<h3 id="details-of-different-images-next-to-each-other**">Details of different images next to each other</h3>
<p>You start seeing repetition of elements etc.</p>
<p>David Carrier (2000): <em><a href="https://www.amazon.co.uk/Aesthetics-Comics-David-Carrier/dp/0271021888">The Aesthetics of Comics</a></em></p>
<p>Talks about the idea of concatenation of images: Whenever we see images in the same context we start making connections. The visual essay can build on this.</p>
<p>See also: Various versions of <em>Cardinal Albrecht as St Jerome</em> by Lucas Cranach the Elder.</p>
<h3 id="introducing-a-new-(diagrammatic)-voice-to-make-an-argument">Introducing a new (diagrammatic) voice to make an argument</h3>
<p>Semantic satiation: when you say a word over and over again, it stops sounding like a word. The same works for images: the image becomes a pattern. Graphic novels have a convention where a full-bleed image is timeless. See also Scott McCloud (1993): <em>Understanding Comics</em>.</p>
<p>We can use devices like this (from graphic novels) to make critical arguments.</p>
<p>Using book rhythm, pacing etc. If the reader thinks they know what's coming, you can emphasise a point by making something different. All of this is repetition.</p>
<h3 id="space">Space</h3>
<p>In graphic novels, white space slows down time (see McCloud). We're talking about space in a spatial medium.</p>
<p>Spacing also creates hierarchy. You can do headline, body copy and sidenotes with images. Using typographic conventions with images. All of the structure of an essay is still there, you can use it with images as well. However, be aware that this hinges on reading direction.</p>
<h3 id="extraction">Extraction</h3>
<p>Pulling out parts of the image (through tracing, distortion, etc.) to make a point.</p>
<h3 id="direct-comparison">Direct comparison</h3>
<p>Brian Dillon (2017): <em><a href="https://www.nytimes.com/2018/09/18/books/review-essayism-brian-dillon.html">Essayism</a></em></p>
<p>Talks about ways to write essays: one of these is the <em>list</em> (i.e. most of the stuff above). Visual essays allow us to demonstrate arguments rather than describe them.</p>
<p>Overlaying images that have visual similarities.</p>
<h3 id="historical-comparison">Historical Comparison</h3>
<p>Pull in images from a different context to make comparisons</p>
<h3 id="reduction%2C-elimination">Reduction, Elimination</h3>
<p>Cut out parts of the image. You draw attention to what's been cut out, and also the stuff around it.</p>
<h2 id="part-2%3A-visual-grammar">Part 2: Visual Grammar</h2>
<p><img src="https://maxkohler.com/assets/vf/mexico-night.jpg" alt="Midnight Mexico City" />
Sarah Sze (2015): <em>Midnight Mexico City</em>. Silkscreen, Digital Print, and Laser Engraved Paper, 58.1 × 63.2cm. <a href="https://www.artsy.net/artwork/sarah-sze-midnight-mexico-city">Artsy</a></p>
<p>Using found structure (such as the grid of a newspaper)
John Berger: Ways of Seeing</p>
<p><img src="https://maxkohler.com/assets/vf/mcguire.jpg" alt="McGuire" />
Richard McGuire (1989): <em>Here</em>. <a href="https://www.theatlantic.com/entertainment/archive/2014/09/richard-mcguires-time-machine-with-a-view/380736/">The Atlantic</a></p>
<p>Richard McGuire is looking at the same corner of a room in different time periods. Using time, layering of different narratives.</p>
<p>Hollis Frampton (1971): <em><a href="https://www.youtube.com/watch?v=voMDL1TgTh4">Nostalgia</a></em>. Frampton shows early photographs of himself burning on a hot plate, while a narrator talks about the previous image (which just got burned).</p>
<p>A Spiegelman comic early in <em>Raw</em> magazine plays on a similar idea, with a character talking about what happened in the last panel:</p>
<p><img src="https://maxkohler.com/assets/vf/spiegelman.gif" alt="Spiegelman" />
Art Spiegelman (1973): <em>Don’t Get Around Much Anymore</em>. <a href="https://slate.com/culture/2011/10/art-spiegelman-before-maus.html">Slate</a></p>
<p>Rachel Moore (2006): <em><a href="https://mitpress.mit.edu/books/hollis-frampton">Hollis Frampton (nostalgia)</a></em></p>
<p><a href="https://www.instagram.com/cindysherman/?hl=en">Cindy Sherman's Instagram</a> is a visual essay in some way (even if maybe unconsciously). She's one of the rare old-school artists doing good things on Instagram.</p>
<p>Using digital platforms to inform visual essays can be useful. The Instagram grid is a visual grammar you can make use of.</p>
<p>Visual Essays don't have to happen in a book — see <em>Nostalgia</em>.</p>
<h2 id="tutorial-notes">Tutorial notes</h2>
<ul>
<li>Connect the lichen database to the images</li>
<li>Use the methods used to collect the images on more artifacts (such as the printed-out database)</li>
<li>Bring in coordinates, establish scale of the microscopic photographs</li>
<li>Think about how images might be presented in space</li>
<li>The interesting thing is how a seemingly dead, neutral surface (such as a wall in a city) is actually fiercely negotiated between all these different species of animals, systems, weather, air pollution etc.</li>
<li>See also the essay on decay from the reading list - the wall isn't a static object, it's constantly in a state of decay, being turned back into the raw material it came from. Glass turns back into crystal, bread starts to rot as soon as it leaves the oven.</li>
<li>We tend to think of nature as contained in parks, but it's actually all around us, living inside our built structures</li>
<li>The British Lichen Society might be an interesting research area, who are these people making and recording these observations</li>
<li>Definitely match up the site I recorded with the correct line in the database - attacking the site from different angles</li>
<li>All of this is very reminiscent of Powers of 10</li>
</ul>
<h2 id="october-26%2C-2018">October 26, 2018</h2>
<h3 id="reading-for-week-three">Reading for week three</h3>
<ul class="contains-task-list">
<li class="task-list-item"><input class="task-list-item-checkbox" disabled="" type="checkbox" /> Georges Perec: <em>Species of Spaces</em></li>
<li class="task-list-item"><input class="task-list-item-checkbox" disabled="" type="checkbox" /> John Berger: <em>Ways of Seeing</em></li>
<li class="task-list-item"><input class="task-list-item-checkbox" disabled="" type="checkbox" /> Brian Dillon: <em>Essayism</em></li>
<li class="task-list-item"><input class="task-list-item-checkbox" checked="" disabled="" type="checkbox" /> Scott McCloud: <em>Understanding Comics</em></li>
</ul>
<blockquote>
<p>There is no specified format and we ask you to think carefully about appropriate outcomes for your visual investigations. This might extend to book, wall based, object based and projection based work.</p>
</blockquote>
<h3 id="outcomes-and-crit-notes">Outcomes and Crit Notes</h3>
<p>I talked about the way the mosaic photographs were made — moving the USB microscope along the surface millimetre by millimetre (within a 1' by 1' square I defined). The movement was then repeated in Photoshop when I stitched the images together. The final collages were laser-printed at a large format for the crit.</p>
<p>I did secondary research on lichen in a number of different directions — there's the <a href="http://www.britishlichensociety.org.uk/">British Lichen Society</a>, and also the <a href="https://maxkohler.com/posts/2018-10-18-visual-forensics/#october-16-2018">database of lichen sightings</a> which they contribute to. I mapped a section of the database onto a map of London, but unfortunately the sightings don't all have individual coordinates associated with them. Instead they're all grouped into maybe a dozen sets of coordinates which I'm sure has a good reason, but doesn't make for a very nice visualization.</p>
<p>The Natural History Museum has one of the <a href="http://www.nhm.ac.uk/our-science/collections/botany-collections/lichen-collections.html">largest collections of lichen specimens</a> in the world, containing about 400,000 items. They're beautiful:</p>
<p><img src="https://maxkohler.com/assets/vf/nhm.jpg" alt="Specimen of Lichen at the Natural History Museum, London" />
Specimen of Lecanora vitellina var. reflexa Nyl. (BM001096649) <a href="http://data.nhm.ac.uk/object/a569c613-34a7-43d1-a49e-04ec2b70d3ef">The Natural History Museum</a></p>
<p>I'm also still interested in diagramming the embankment in different ways, maybe developing this drawing I made at the site:</p>
<p><img src="https://maxkohler.com/assets/vf/site-diagram.jpg" alt="Diagram of the site" /></p>
<p>Eventually, I decided to focus on the Turing Patterns. I wrote a <a href="https://codepen.io/maxakohler/pen/JmwayP/">Javascript implementation</a> of the <a href="http://www.karlsims.com/rd.html">Gray-Scott algorithm</a> so I could control everything about the simulation. After experimenting for a while, I found a few sets of parameters that led to patterns that matched the lichen photographs very closely. I then wrote a <a href="https://github.com/GoogleChrome/puppeteer">Puppeteer</a> script that takes a screenshot of the simulation every few seconds. Using the script, I generated a series of images using different parameters. I then printed and bound these into a book in chronological order:</p>
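For reference, the core of the Gray-Scott update can be sketched like this (parameters and Laplacian weights follow Karl Sims' write-up; a minimal sketch, not the actual CodePen code):

```javascript
// Minimal Gray-Scott reaction-diffusion step. Each cell holds two
// chemicals A and B; B consumes A (the A*B*B term), A is replenished
// at feed rate f, B is removed at kill rate k, and both diffuse.
const N = 64;
const dA = 1.0, dB = 0.5, f = 0.055, k = 0.062, dt = 1.0;

const makeGrid = fill =>
  Array.from({ length: N }, () => new Float64Array(N).fill(fill));
let A = makeGrid(1); // chemical A everywhere
let B = makeGrid(0); // chemical B seeded in a small square
for (let y = 28; y < 36; y++)
  for (let x = 28; x < 36; x++) B[y][x] = 1;

// 3x3 Laplacian (centre -1, orthogonal neighbours 0.2, diagonals 0.05),
// wrapping around at the grid edges
function laplacian(g, x, y) {
  let sum = -g[y][x];
  const w = [
    [-1, -1, 0.05], [0, -1, 0.2], [1, -1, 0.05],
    [-1, 0, 0.2], [1, 0, 0.2],
    [-1, 1, 0.05], [0, 1, 0.2], [1, 1, 0.05],
  ];
  for (const [dx, dy, wt] of w)
    sum += wt * g[(y + dy + N) % N][(x + dx + N) % N];
  return sum;
}

// One simulation step over the whole grid
function step() {
  const nA = makeGrid(0), nB = makeGrid(0);
  for (let y = 0; y < N; y++) {
    for (let x = 0; x < N; x++) {
      const a = A[y][x], b = B[y][x], abb = a * b * b;
      nA[y][x] = a + (dA * laplacian(A, x, y) - abb + f * (1 - a)) * dt;
      nB[y][x] = b + (dB * laplacian(B, x, y) + abb - (k + f) * b) * dt;
    }
  }
  A = nA; B = nB;
}

for (let i = 0; i < 100; i++) step();
```

Varying `f` and `k` is what produces the different pattern families; the Puppeteer script then only has to load the page and call `page.screenshot()` on a timer.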
<p>The book is designed to <a href="https://maxkohler.com/posts/2018-10-18-visual-forensics/#introducing-a-new-diagrammatic-voice-to-make-an-argument">add a secondary voice</a> to the argument. There is also the idea that the book maps a single environment as it changes over time (while the photographic collages move across the surface spatially). I'm imagining it like this:</p>
<p><img src="https://maxkohler.com/assets/vf/space-diagram.svg" alt="Diagram showing images along spatial and temporal dimensions" />
<strong>A</strong>: Photographic collage, <strong>B</strong>: Turing-Pattern book.</p>
<h2 id="november-9%2C-2018%3A-materiality">November 9, 2018: Materiality</h2>
<h3 id="reading-for-november-9">Reading for November 9</h3>
<ul class="contains-task-list">
<li class="task-list-item"><input class="task-list-item-checkbox" checked="" disabled="" type="checkbox" /> Peter Fischli & David Weiss: <em><a href="https://www.youtube.com/watch?v=GXrRC3pfLnE">The Way Things Go</a></em> (3 mins)</li>
<li class="task-list-item"><input class="task-list-item-checkbox" checked="" disabled="" type="checkbox" /> Sarah Sze: <em><a href="http://channel.louisiana.dk/video/sarah-sze-meaning-between-things">Meaning between Things</a></em> (12 Mins)</li>
</ul>
<h3 id="brief-for-novemeber-9">Brief for November 9</h3>
<blockquote>
<p>Hi All, It was great to see your responses to ‘Image’ in last week’s review session and to share ideas, thoughts and comments within your groups. During Across RCA, please complete the primer task attached. This will inform the next session, a workshop to begin our exploration of 'Materiality'. We introduced some background ideas at the end of our last session and here are the reference links if anyone would like to watch the short films again;</p>
</blockquote>
<p><a href="https://drive.google.com/file/d/1eGyu4c5s6UvY7fuSf8tR9dh8h3U4vYxN/view">Materiality Brief (DOCX)</a></p>
<p>The brief is to find a <em>Process</em>, a <em>Material</em>, an <em>Object</em> and a <em>Tool</em> and write a few words about each. It also suggests thinking of these as separate things, so that's what I'm doing below:</p>
<h3 id="process%3A-tuning-an-instrument">Process: Tuning an Instrument</h3>
<p>I have personal experience here of course. I remember learning to tune my own violin being a pretty big step, and it took me forever to learn how to do it. Part of it is hearing: First, you tune your A string to whatever reference you're using — another violin, a tuning fork or a piano. Then, you adjust each string to the A string - first the D, which is a fifth below A. Then G, a fifth below D. Finally E, a fifth above A. There's also a muscle memory component — how do you hold your hand on the pegs to achieve the right amount of leverage? Unlike geared guitar tuners, violin pegs are just tapered pieces of wood stuck into a hole.</p>
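The chain of fifths can also be written down as frequency ratios (a sketch assuming pure 3:2 fifths from A = 440 Hz, not the equal-tempered fifths of a piano):

```javascript
// Each tuning step above is a perfect fifth (frequency ratio 3:2)
// up or down from the A reference
const A = 440;
const fifthDown = freq => (freq * 2) / 3;
const fifthUp = freq => (freq * 3) / 2;

const D = fifthDown(A); // ≈ 293.33 Hz, a fifth below A
const G = fifthDown(D); // ≈ 195.56 Hz, a fifth below D
const E = fifthUp(A);   // 660 Hz, a fifth above A
console.log({ A, D: D.toFixed(2), G: G.toFixed(2), E });
```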
<p>In orchestra performances, every player goes through this process at the beginning of a performance. It always happens the same way: Orchestra walks on, Applause, Orchestra sits down, Concert Master walks on, Applause, Concert Master turns around — this is the command to tune. It always happens in the same order: Wind, Brass, Strings from low to high. Finally, conductor (and soloist) walk on.</p>
<h3 id="material%3A-the-enron-corpus">Material: The Enron Corpus</h3>
<p><a href="https://www.newyorker.com/magazine/2017/07/24/what-the-enron-e-mails-say-about-us">What the Enron Corpus Says About Us</a>
I originally found this during my undergrad <a href="http://awesomephant.github.io/2018/feret-database/#febuary-18-2018">at Camberwell</a>, but didn't really do anything with it at the time.</p>
<p>I consider databases like this one materials because they're <em>the stuff algorithms</em> are made from. By themselves, they're basically useless — many contain more instances than you could ever look at in a lifetime, and they're usually pretty monotonous. They only become meaningful when they're turned <em>into</em> something.</p>
<ul>
<li><strong>Object</strong>: Styrofoam Head</li>
<li><strong>Tool</strong>: Kitchen Blender (Braun)</li>
</ul>
<p>We spent the morning handling the material and arranging it by various criteria:</p>
<ul>
<li>Density, as in $$\rho = \frac{m}{V}$$</li>
<li>Density, as in <a href="https://en.wikipedia.org/wiki/Opacity_(optics)">opacity</a>: $$I(x)=I_{0}e^{-\kappa \rho x}$$</li>
<li>Density, as in consistency</li>
<li>Monetary value (how much would these be to buy?)</li>
<li>Raw material value (if you melted these down, how much would they be worth?)</li>
<li>Sentimental value (here we can only go by assumption — if your dad lost a leg in the Ibuprofen factory, that's going to have sentimental value for you)</li>
<li>Photographic value (how much light does it reflect?)</li>
</ul>
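For the two physical senses of density above, a quick worked example (all numbers made up for illustration, roughly in the ballpark of a styrofoam head):

```javascript
// Mass density: rho = m / V
const m = 0.05;    // kg (hypothetical)
const V = 0.003;   // m^3 (hypothetical)
const rho = m / V; // ≈ 16.7 kg/m^3 — styrofoam is mostly air

// Optical density: transmitted intensity I(x) = I0 * exp(-kappa * rho * x)
const I0 = 1.0;    // incident light intensity
const kappa = 0.02; // opacity in m^2/kg (assumed)
const x = 0.15;    // path length through the object, m
const I = I0 * Math.exp(-kappa * rho * x); // ≈ 0.951 — nearly transparent
console.log({ rho: rho.toFixed(1), transmitted: I.toFixed(3) });
```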
<p><img src="https://maxkohler.com/assets/vf-table-1.jpg" alt="VF Table" />
<img src="https://maxkohler.com/assets/vf-table-2.jpg" alt="VF Table" />
<img src="https://maxkohler.com/assets/vf-table-3.jpg" alt="VF Table" /></p>
<p>The task for the next session is to generate a visual outcome on <em>an aspect of materiality in time/space</em>.</p>
<h2 id="november-10%2C-2018">November 10, 2018</h2>
<p class="full hasImage" markdown="1">
![Maths notes](/assets/cabinet_053_hunt_katherine_002.jpg)
“The strange operation & mistery of numbers.” Peter Mundy’s notation of all possible changes on three, four, five, and six bells in his travel journal “Itinerarium Mundi.” Entry made after Mundy’s visit to London in 1654. [Cabinet Magazine](http://cabinetmagazine.org/issues/53/hunt.php)
</p>
<h2 id="notes-for-friday%2C-febuary-8th">Notes for Friday, February 8th</h2>
<blockquote>
<p>Reflect on all work produced as part of VISUAL FORENSICS through IMAGE, MATERIALITY, COLOUR, LANGUAGE. Include (physical) examples of all work. What worked? What was a surprise? What did you uncover? Did this constitute a step forward in your working methods? What connections have you observed in your approach to the module? Have any threads emerged?</p>
</blockquote>
<ul>
<li>I think whenever I managed to come up with some (obsessive) process, it led to good outcomes: The lichen tile photographs, the New York Times book, and to some extent the time video (although with that project there's less obsessive manual collecting of data. But still, there's a process — training a DNN to predict video frames — that's repeated with different inputs, parameters etc. to create the final piece)</li>
<li>Spreadsheets are generally a good idea</li>
<li>I think the main step forward here was to work with images in a more fluid way - treating them as raw material in different ways.</li>
<li>I kind of enjoyed working out responses to different briefs on different subject material — apart from the ML piece, the subjects of these pieces don't really relate to my overarching research interest. But I think that's okay - they relate in terms of process.</li>
</ul>
<blockquote>
<p>What are the key-themes, aspects, processes and/or concepts informing your practice?</p>
</blockquote>
<ul>
<li>I think the overarching theme is clearly approaching these ontological questions with a scientific, algorithm-based method. I choose the word <em>algorithm</em> (rather than <em>coding</em>) because some of the stuff was done very manually: Photographing the lichen for hours and spending like three days screenshotting New York Times articles. But these processes are still algorithms in the sense that there's a clearly defined series of rules and decisions that I follow over and over again.</li>
<li>The archive is a recurring theme</li>
<li>Secondary research (the Turing paper, the NHM samples)</li>
</ul>
<blockquote>
<p>What have you been reading or looking at during this module?</p>
</blockquote>
<ul>
<li>I've been making these <a href="https://maxkohler.com/2019/gradient-drawings/">gradient drawings</a> based on the work of Xenakis, so I've been reading about harmonic series and old architecture</li>
<li>Maths: Statistics, Computer Vision, machine learning maths. Doing this Stanford course on machine vision.</li>
<li>Re-read some key texts on the archive: Body and the Archive, Archive Fever</li>
<li>Some basic photography theory: Barthes, Elkins</li>
<li>Forensic Architecture is definitely a key reference</li>
</ul>
<blockquote>
<p>What is your relationship to the idea of 'FORENSICS'? How are you using, interrogating, understanding and developing this idea?</p>
</blockquote>
<ul>
<li>I think the word Forensics describes looking at the world in a very careful, methodical way (common to art and science). It's also (in my opinion) a way of looking that's somehow detached from personal feelings, instead trying to achieve some level of "objectivity" (but also realising that that's never fully possible).</li>
</ul>
<h2 id="friday-1-march%3A-is-this-tomorrow%3F">Friday 1 March: Is this Tomorrow?</h2>
<p><a href="https://www.whitechapelgallery.org/exhibitions/is-this-tomorrow/">Is this Tomorrow?</a> at the Whitechapel is a re-creation of the 1956 show called <em>This is tomorrow</em>. From what I understand, that one was full of post-war optimism (this is the time of the great social projects) while this one seems much darker.</p>
<h3 id="mono-office">mono office</h3>
<ul>
<li>Drawing attention to the exhibition infrastructure (by pointing arrows at it)</li>
<li>Very highly finished. Wooden frames with precisely sized prints maybe recall the 1956 show</li>
</ul>
<h3 id="6a-architects">6a architects</h3>
<ul>
<li>Agricultural equipment (pre-manufactured steel, cast plastic) vs gallery surroundings (wood panel floor, high ceilings, skylights). Probably the opposite environment to where you'd normally encounter an enclosure like this.</li>
<li>See also <em><a href="https://www.lars-mueller-publishers.com/handbook-tyranny">Handbook of Tyranny</a></em></li>
<li>Non-human architecture</li>
</ul>
<h3 id="numbers">Numbers</h3>
<ul>
<li>Show lacks some traditional gallery devices (unlike a place like the Tate, or the NHM, pieces here aren't numbered). Maybe a function of everything being freshly commissioned.</li>
<li>Some pieces contain numbers, like the sheep enclosure (printed serial number on steel parts) and the mono office piece (introduces its own internal labelling system)</li>
</ul>
<h3 id="marina-tabassum-architects">Marina Tabassum Architects</h3>
<ul>
<li>Another piece that interacts with the exhibition environment: You look up this stone-age type hole in the ceiling (although covered in sickly, candy-coloured spraypaint) but what you see isn't the sky, but smoke detectors, light fixtures, pipes.</li>
<li>Robot cavepaintings</li>
</ul>
<h3 id="salvador-mundi-experience">Salvator Mundi Experience</h3>
<ul>
<li>Probably not too far off what the actual Museum is going to be like.</li>
<li>Self-referential: The piece has a schematic drawing of itself bolted to the outside</li>
<li>Ambiguous whether we're looking at a piece or an architectural model of a piece.</li>
</ul>
<h3 id="black-barriers">Black barriers</h3>
<ul>
<li>More readymade architecture, not unlike the sheep enclosure (but for people). Apart from these being painted black, they're probably made from the same steel tubing.</li>
<li>Also something about the temporary nature of these structures - designed to be moved around, slotted in wherever people's movement needs to be restrained.</li>
<li>Together, they form this huge black mass. Motion-activated loudspeakers give the impression that the things are awake, collecting data, phoning home.</li>
</ul>
<h3 id="bio-reactor">Bio-Reactor</h3>
<ul>
<li>Not sure how the back-projected bird and the bioreactor/screen setup are connected.</li>
<li>Draws on scifi-aesthetic: Tubes, wires, random photos, all unlabelled, leading to a slightly cryptic screen at the front of the piece. Very unlike the <em>mono office</em> piece.</li>
</ul>
<h2 id="notes-on-the-canal-museum-show">Notes on the Canal Museum Show</h2>
<blockquote>
<p>Please prepare a digital presentation and bring any physical work that you have made so far. Your presentation should cover: Concepts, Process, How it relates to a 'forensic' approach, What you have made so far, Supporting material (including relevant work from past projects), Potential modes of presentation and display structures, How you want people to engage with the work, Proposed space required within the Canal Museum.</p>
</blockquote>
<p>We understand that the work might change and we are not expecting you to know exactly what the work will look like at this stage but this statement of intent will be really useful in helping develop the work in the coming months and in curating the exhibition/event.</p>
<p>The building used to be a storage facility for ice blocks imported from Norway.</p>
<ul>
<li><a href="http://www.canalmuseum.org.uk/ice/iceimport.htm">The canal museum on the ice trade</a></li>
<li>James Graham, Caitlin Blanchfield, Alissa Anderson, Jordan Carver, Jacob Moore (Editors) 2016: <em><a href="https://www.arch.columbia.edu/books/catalog/138-climates-architecture-and-the-planetary-imaginary">Climates: Architecture and the Planetary Imaginary</a></em></li>
<li>Pars Foundation (2007): <em><a href="https://www.parsfoundation.com/Findings-on-Ice">Findings on Ice</a></em></li>
</ul>
<p><img src="https://maxkohler.com/assets/vf/ice-melting.jpg" alt="The sound of ice melting" />
Paul Kos (1970), <em>Sound of Ice Melting</em>. Via <a href="http://blogs.discovermagazine.com/imageo/2013/03/27/unintended-art-of-the-anthropocene-the-sound-of-ice-melting/#.VgIQuI9Viko">Discover Magazine</a></p>
<p>This piece was originally made in response to the Vietnam War (the microphones recall a press conference), but now the immediate association is climate change / the Anthropocene. <a href="https://kadist.org/work/sound-of-ice-melting/">Kadist on the piece</a>.</p>
<p><img src="https://maxkohler.com/assets/vf/ice-core.jpg" alt="Ice core showing band of volcanic ash" />
Ice core from the <em><a href="http://www.waisdivide.unh.edu/">West Antarctic Ice Sheet</a></em> project. The dark band is a layer of volcanic ash that settled on the ice sheet approximately 21,000 years ago. Via the <a href="https://www.nsf.gov/news/news_images.jsp?cntn_id=134908">National Science Foundation</a></p>
<ul>
<li><a href="https://climate.nasa.gov/news/2616/core-questions-an-introduction-to-ice-cores/">NASA Introduction to Ice Cores</a></li>
<li>Wikipedia has a <a href="https://en.wikipedia.org/wiki/List_of_ice_cores">list of all the ice cores</a></li>
</ul>
<h3 id="to-do">To Do</h3>
<ul class="contains-task-list">
<li class="task-list-item"><input class="task-list-item-checkbox" checked="" disabled="" type="checkbox" /> Collect training data (~1800 image pairs at a one-hour interval should be enough)</li>
<li class="task-list-item"><input class="task-list-item-checkbox" disabled="" type="checkbox" /> Process ice images into A/B dataset for pix2pix, taking into account different shooting intervals</li>
<li class="task-list-item"><input class="task-list-item-checkbox" disabled="" type="checkbox" /> Train pix2pix model (probably for a few days)</li>
<li class="task-list-item"><input class="task-list-item-checkbox" disabled="" type="checkbox" /> Deploy the model and build an interface that powers the installation. This needs to take an image, make a prediction, display the result and add some details like timestamps and so on. Ideally web-based for future applications.</li>
</ul>
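The A/B pairing step could be sketched like this (filenames are hypothetical; the real script would also need to crop and resize the frames). Pix2pix expects pairs where A is the input frame and B is the frame one interval later, so consecutive timelapse frames become training pairs; the stride accounts for different shooting intervals:

```javascript
// Pair chronologically sorted timelapse frames into pix2pix A/B pairs.
// stride = 1 pairs each frame with the next one; a larger stride
// simulates a longer prediction horizon (e.g. stride 2 at a one-hour
// capture interval gives two-hour pairs).
function makePairs(frames, stride = 1) {
  const pairs = [];
  for (let i = 0; i + stride < frames.length; i += stride) {
    pairs.push({ a: frames[i], b: frames[i + stride] });
  }
  return pairs;
}

// Hypothetical frame names, one per capture interval
const frames = ["ice-0000.jpg", "ice-0001.jpg", "ice-0002.jpg", "ice-0003.jpg"];
console.log(makePairs(frames, 1));
```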
<h2 id="show-setup">Show Setup</h2>
<ol>
<li>Plug webcam into USB extension and into computer</li>
<li>Frame using native camera app</li>
<li>Close native camera app</li>
<li>In Anaconda prompt:
<ol>
<li><code>cd D:\Projects\visual-forensics\show-piece</code></li>
<li><code>node .</code></li>
</ol>
</li>
<li>Open localhost:3000 in Chrome</li>
<li>From browser command line: Run <code>startCycle()</code></li>
<li>Close devtools, F11 for fullscreen</li>
</ol>
<p>...</p>
<ol start="8">
<li>Kill server from command line, close browser</li>
</ol>
<p><strong>Dissertation Notes</strong>, 2019-01-15: <a href="https://maxkohler.com/posts/2019-01-15-dissertation-notes/">https://maxkohler.com/posts/2019-01-15-dissertation-notes/</a></p>
<h2 id="proposal-(december-2018)">Proposal (December 2018)</h2>
<p>Provisional Title: The photographic image in the age of the neural network</p>
<h3 id="topic-(approx.-100-words)">Topic (approx. 100 words)</h3>
<p>How does our understanding of the photographic image change when most photographs are taken by machines for machines? What happens to the photograph when it becomes an instance (of millions) in a machine-learning dataset, and are these datasets a modern extension of the photographic archive? How do we respond to images produced by machines (i.e. operational images and, more recently, deepfakes)?</p>
<h3 id="research">Research</h3>
<p>There’s a rich body of literature on our relationship with the digital photograph (Steyerl, Farocki, Paglen) and the archive (Foucault, Sekula, Derrida, and more recent writers on instances of digital archives like the Enron Corpus, Facebook profiles, and various archive-related efforts by Google).</p>
<p>I’m hoping to base this cultural analysis on readings of papers from the field of computer science, both historical (such as Rosenblatt 1958, which introduces the concept of the neural network) and contemporary (such as Taigman, Yang, Ranzato and Wolf (2014), which describes the first image-classification model with human-level performance (developed by Facebook) and Nguyen, Yosinski and Clune 2015, which details how such models can be fooled using artificial imagery).</p>
<h3 id="600-800-word-text">600-800 Word Text</h3>
<p>If you can imagine it, there is probably a dataset of it. The "Machine Learning Repository", which is maintained by the University of California, lists 426 datasets at the time of this writing, each consisting of between hundreds and tens of millions of instances. A set of anonymised records from the 1990 U.S. census (24 million instances) sits next to one consisting of 150 hours of Indian TV news broadcasts (12 million instances). The 371 choral works of J.S. Bach (in machine-readable form) can be found next to cases of breast cancer in Wisconsin (699 of them), forest fires (571) just below Facebook comments (40,949) (University of California, Irvine). If we narrow the search to datasets of images, we still get countless results. There is the Stanford Dogs dataset (20,580 instances, 110 breeds), the German Traffic Sign Detection Benchmark dataset (900), and dozens of datasets of human faces. Arranged in chronological order, the face datasets tell us about the shifting economic circumstances of database production.
The earliest face datasets are created by research groups directly engaged in facial recognition research, and predominantly feature whoever was walking around the laboratory at the time. The Yale Face Database (1997, 165 instances) and the Carnegie Mellon Face Images Dataset (1999, 640 instances) are examples of this. In the early 2000s, we start to see targeted efforts to generate face databases, now detached from the researchers working on the algorithms themselves. The FERET database (2003, 11,338), which was funded by the U.S. Defence Department, is perhaps the most striking example of this. Though the number of instances has jumped by two orders of magnitude, the fundamental mode of production hasn't changed from the first phase of datasets: a professional photographer (hired for this purpose) records paid volunteers, for the sole purpose of creating material for the database. This relationship starts to change by 2010, when image databases are increasingly sourced from public sources on the internet using automated crawlers. This shift from original production to automated extraction of images allows the number of instances to increase by orders of magnitude again: FaceScrub (2014, 107,818) was compiled using Google's image search, while IMDB-WIKI (2015, 523,051) and the Youtube Face Database (2012, ~600,000) bear their mining-grounds in their names.</p>
<p>The largest face dataset whose existence has been publicly acknowledged (at 4,000,000 instances) isn't even listed: it's Facebook's proprietary face dataset, which is not publicly available. By controlling the richest dataset, Facebook by extension controls the world's most powerful facial recognition algorithm (Taigman et al., 2014). How do we deal with the emergence of these vast datasets in cultural terms? It seems natural to place the database in the tradition of the photographic archive. But are they really the same? Sekula (1986) describes how the photographic archive of the 19th century serves to "define, regulate" and thus to control social deviance. The dataset certainly serves that function - one needn't look very hard to find countless examples of police, governments and corporations using automated image-making to sort people along scales of likely social compliance (Paglen, 2016) (Sekula, 1986). However, in some ways the database seems fundamentally different from the archive. First, the archive usually comes with an index (or catalogue) to help whoever is accessing the archive find any particular record (Berthod, 2017). The dataset is essentially a flat list with no means of navigation other than sorting by filenames (which are often meaningless). This leads to the larger observation that in the database, the individual record is essentially meaningless. Only the accumulation of thousands, or millions, of similar records makes it useful - as Halevy et al. (2009) show, the accuracy of an algorithm is directly linked to the quantity (not completeness, or even accuracy) of the training data (Steyerl, 2016). Secondly:</p>
<p>While both the archive and the dataset exert power, they do so in different ways. The archive controls primarily whoever is recorded in the archive (or conversely, whoever is left out). The dataset has no spatial or temporal limitations - a dataset of portraits collected in the Midwest in the 1990s might be used by a police computer on the other side of the globe, 30 years later. This is perhaps because the dataset, unlike the archive, is ultimately a means to an end: A raw, unrefined material from which algorithms might be forged. In this context the agricultural language surrounding the creation of archives and databases (as observed by Steyerl), seems to underline this point: The archive is curated, recorded, built-up, accumulated. Data is mined, harvested and crawled before truckloads of it are compressed, distributed and fed to the algorithm.</p>
<h3 id="bibliography">Bibliography</h3>
<ul>
<li>Susan Leigh Star (2000): <em>Sorting Things Out: Classification and Its Consequences</em> MIT Press</li>
<li>Hal Foster (2004): <em>An archival impulse</em>. Available from https://www.jstor.org/stable/3397555</li>
<li>Nguyen A, Yosinski J, Clune J (2015): <em>Deep neural networks are easily fooled: High confidence predictions for unrecognizable images</em>. Available from evolvingai.org/fooling</li>
<li>Yaniv Taigman, Ming Yang, Marc'Aurelio Ranzato, Lior Wolf (2014): <em>DeepFace: Closing the Gap to Human-Level Performance in Face Verification</em>. Available from research.fb.com/publications/deepface-closing-the-gap-to-human-level-performance-in-face-verification/</li>
<li>Trevor Paglen (2016): <em>Invisible Images (Your Pictures Are Looking at You)</em>. Available from: thenewinquiry.com/invisible-images-your-pictures-are-looking-at-you/</li>
<li>Allan Sekula (1986): <em>The Body and the Archive</em>. Available from: www.jstor.org/stable/778312</li>
<li>Alon Halevy, Peter Norvig, and Fernando Pereira (2009): <em>The Unreasonable Effectiveness of Data</em> https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/35179.pdf</li>
<li>Hito Steyerl (2016): <em>A sea of Data: Apophenia and Pattern (Mis-)Recognition</em> https://www.e-flux.com/journal/72/60480/a-sea-of-data-apophenia-and-pattern-mis-recognition/</li>
<li>Rob Horning (2018): <em>Plausible Disavowal</em>. Real Life Magazine. Available from: reallifemag.com/plausible-disavowal/</li>
<li>Charles Merewether (2006): <em>The Archive: Documents of Contemporary Art</em>. MIT Press</li>
</ul>
<h3 id="non-print-sources">Non-print sources</h3>
<p>Artists dealing with the operational image, databases etc:</p>
<ul>
<li>Farocki, H. (2001). Eye/Machine I. [Two-channel video installation re-edited to single-channel video (color, sound)] New York: Museum of Modern Art.</li>
<li>Paglen, T. (2017). It Began as a Military Experiment [C-type prints]. New York: Metro Pictures</li>
</ul>
<p>Various Image-Databases such as:</p>
<ul>
<li>National Institute for Standards and Technology (2003): Face Recognition Technology (FERET). [Database of digital images]. Available from https://www.nist.gov/programs-projects/face-recognition-technology-feret</li>
<li>Lior Wolf, Tal Hassner and Itay Maoz (2011): Youtube Faces Database [Database of digital videos]. Available from https://www.cs.tau.ac.il/~wolf/ytfaces/</li>
</ul>
<h2 id="tutorial-notes-january-15%2C-2019">Tutorial Notes January 15, 2019</h2>
<ul>
<li>The proposal touches on different aspects of the same ontological question about the nature of photography in the digital age.</li>
<li>It would be useful to start the text by establishing the existing discourse around photography (see James Elkins)</li>
<li>Subsequent chapters can then deal with the problem from various angles</li>
</ul>
<p>There is also the observation that thinking about how machines see the world forces us to question how we ourselves do it.</p>
<p>Chapters could be:</p>
<ul>
<li>Database vs the Archive as touched on in the proposal. Bertillon vs NIST.</li>
<li>The idea that the act of looking at photographs is increasingly outsourced to low-paid workers. Also the view of data-mining companies (ie. social media platforms) as extractive industries.</li>
<li>What happens when machines look at images (note the usage of the word <em>look</em> — not particularly accurate, but we don't have anything better.)</li>
<li>What happens when machines make images for us to look at (although the main reason generative networks are being developed is to extend training datasets for other machines)</li>
</ul>
<p><img src="https://maxkohler.com/assets/ml/train_01_0092.png" alt="GAN-generated images" />
GAN / Youtube Faces Dataset</p>
<p>The main methodological idea remains to do this not just through sources from the humanities, but also to understand the mathematics, physics, economics and logistics of machine vision.</p>
<h2 id="january-16%2C-2018">January 16, 2019</h2>
<p>Possible chapters / angles:</p>
<h3 id="data-collection-(how-images-are-taken)">Data collection (how images are taken)</h3>
<p>(Through sensors etc, driven by economy, some images start life as representations and become data)
This would be where the reflection on the database vs the archive goes. Also a good place to do figures that update dynamically.</p>
<ul>
<li>Architecture</li>
<li>Code (this is maybe the least interesting layer)</li>
</ul>
<h3 id="mathematics-(network-architecture-and-history-thereof)">Mathematics (Network architecture and history thereof)</h3>
<p>Assumptions can be hard-coded into architecture. This section should probably include an explanation of how common network architectures work, also a good place for live demonstrations. Maybe a text-to-image model? Or a pix2pix trained on line drawings.</p>
<h3 id="infrastructure-(cables%2C-buildings%2C-chips)">Infrastructure (Cables, Buildings, Chips)</h3>
<p>Google (and also Facebook) are building these dedicated processing units:
<img src="https://maxkohler.com/assets/ml/tpu-2.png" alt="TPU " />
Third generation Cloud TPU <a href="https://cloud.google.com/blog/products/ai-machine-learning/what-makes-tpus-fine-tuned-for-deep-learning">Google</a></p>
<p><a href="https://www.wired.com/2017/05/google-rattles-tech-world-new-ai-chip/">Wired on the TPU</a></p>
<h3 id="labour-(generating-training-data%2C-reverse-turing-test)%3A-who-looks-at-the-images">Labour (generating training data, reverse Turing test): Who looks at the images</h3>
<p>There is this narrative that machine learning models spring from the minds of genius programmers. See for instance this <a href="https://www.fastcompany.com/90244767/see-the-shockingly-realistic-images-made-by-googles-new-ai">Fastcompany article</a>: It suggests a Google intern made this amazing model, when really the guy has a PhD and used thousands of pounds' worth of computing power. And more broadly, the datasets, hardware, infrastructure etc. are propped up by much lower-skilled labour.</p>
<p>A lot of papers use Amazon Mechanical Turk to validate results or generate training sets:</p>
<ul>
<li>The Atlantic: <a href="https://www.theatlantic.com/business/archive/2018/01/amazon-mechanical-turk/551192/">The Internet Is Enabling a New Kind of Poorly Paid Hell</a></li>
</ul>
<h3 id="machines-generating-images">Machines generating images</h3>
<p>Maybe talk about a specific model: Pix2pix would seem to be a good candidate.</p>
<h2 id="group-tutorial-notes">Group tutorial notes</h2>
<h3 id="keywords-(6-10)">Keywords (6-10)</h3>
<ol>
<li>Archive</li>
<li>Database</li>
<li>Computer Vision</li>
<li>Machine Learning</li>
<li>Reverse Turing Test</li>
<li>Digital Photography</li>
<li>Operational Images</li>
<li>Digital Economy</li>
</ol>
<h3 id="map">Map</h3>
<blockquote>
<p>a current ‘map’ of the dissertation (including key themes / writers / artists / examples)</p>
</blockquote>
<p>I think it's probably necessary to talk about real-world examples of computer vision having social consequences, but the ultimate goal is to get to a more fundamental question: How do we have to look at photography in the age of the database? Also, the age of the database has been going on for far longer than neural networks have been in the public consciousness.</p>
<div class="full" markdown="1">
![Flow](/assets/dissertation/flow.svg)
</div>
<h3 id="images">Images</h3>
<div class="gallery full" markdown="1">
![pix2pix](/assets/ml/pix2pix.png)
![pix2pix](/assets/ml/72-outputs.png)
![pix2pix face](/assets/ml/fb-pose.png)
![pix2pix face](/assets/ml/feret.jpg)
![pix2pix face](/assets/ml/perceptron.png)
![pix2pix face](/assets/ml/paglen-5b.jpg)
![pix2pix face](/assets/ml/andy.gif)
![NIST Mugshot](/assets/ml/mugshot.png)
![DGAN research image](/assets/ml/test-arrange-dgan.png)
![Eigenfaces](/assets/ml/eigen.png)
![pix2pix research image](/assets/ml/n1.png)
![Linear classification templates](/assets/ml/templates-2.jpg)
![Convnet activations](/assets/ml/conv-activations.png)
![Citizens of the 20th century](/assets/ml/sander.jpg)
</div>
<h3 id="key-texts">Key Texts</h3>
<ul>
<li>Barthes (1980): <em>Camera Lucida</em></li>
<li>Sekula (1984): <em>The body and the archive</em></li>
<li>Elkins (2011): <em>What photography is</em></li>
<li>Isola, Zhu, Zhou, Efros (2016): <em>Image-to-Image Translation with Conditional Adversarial Networks</em></li>
<li>Paglen (2016): <em>Invisible Images (Your Pictures Are Looking at You)</em></li>
<li>Steyerl (2016): <em>A Sea of Data: Apophenia and Pattern (Mis-)Recognition</em></li>
<li>Lecture notes from CS231n at Stanford</li>
<li>Bowker, Star (1999): <em>Sorting Things Out: Classification and Its Consequences</em></li>
</ul>
<h3 id="new-writing">New writing</h3>
<blockquote>
<p>a short piece of writing (approx. half a page / one page) that you are happy to share with the group. The writing doesn't need to be a summary of your thinking, it can be a new piece of writing that is responding to one of your key ideas... and it can be quite rough! It might be helpful to bring some text that you would like feedback on - whether it is the style of writing, the ideas contained within... or a combination of both.</p>
</blockquote>
Gradient Drawings2019-01-15T10:00:00Zhttps://maxkohler.com/posts/2019-01-15-gradient-drawings/<p>I started making machine drawings during my undergrad. Many of my early drawings were born from the excitement of getting the machine to work at all. Suddenly, I could draw perfectly straight lines, repeat gestures hundreds of times and keep drawing for hours at a time. There's something oddly mesmerising about seeing a familiar drawing instrument (like a ballpoint pen) move in an unfamiliar way: slowly, in long, straight lines and with even pressure.</p>
<p>Eventually I made another discovery: When I repeated a shape often enough (or stepped back far enough), the lines dissolved into shades of gray. I made some drawings that experimented with this, but they were never completely satisfying.</p>
<p>A few months after my graduation, an architect gave me a <a href="https://books.google.co.uk/books/about/Music_and_Architecture.html?id=fTYVAAAACAAJ&source=kp_book_description&redir_esc=y">1976 book</a> on Iannis Xenakis, the modernist composer. The book covers decades' worth of work, but I was struck by a drawing from early in Xenakis' career, when he was still working in Le Corbusier's studio in Paris.</p>
<p><img src="https://maxkohler.com/assets/xenakis/facade.png" alt="Xenakis facade drawings" />
Iannis Xenakis, table with progressions of rectangles with increasing widths drawn from the Modulor. Source: Fondation Le Corbusier, Paris. Reproduced in Sterken (2007).</p>
<p>It shows a series of designs for the facade of the monastery of Sainte-Marie de La Tourette, near Lyon in France. Each of the 18 designs is unique, yet they all seem to come from some common system.</p>
<p><img src="https://maxkohler.com/assets/xenakis/building.jpg" alt="Xenakis facade drawings" />
Iannis Xenakis and Le Corbusier: Sainte Marie de La Tourette. <a href="http://thesis.arch.hku.hk/2016/musi-tecture-architecture-informed-by-music/">Source</a></p>
<p>My understanding is that he derives these from a harmonic series based on the Modulor. High-resolution scans of Xenakis' drawings don't seem to exist (even his own monograph suffers from poor reproductions), so it's hard to reverse-engineer the exact method he used to generate these patterns. As best I can tell, he takes one number $x_0$ and multiplies it by a fixed ratio (presumably the golden ratio $\varphi \approx 1.618034$, since that's how the Modulor is derived) to get a second number. The second number is multiplied by $\varphi$ again to obtain a third, and the process is repeated to create a series of $n$ numbers. In short:</p>
<p>$$x_n = x_{0} \varphi^n$$</p>
<p>Once the value of $x_n$ exceeds a certain threshold, he starts dividing by $\varphi$ instead of multiplying. Once $x_n$ becomes smaller than a certain value, he switches back to multiplication, and so on. I think he repeats this process for different values of $x_0$ and then combines the resulting series of numbers (by ordering them by value) to obtain the final result.</p>
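<p>This bouncing multiply/divide process is easy to sketch in code. Here is a minimal version, assuming invented threshold values and starting numbers — Xenakis' actual bounds aren't recoverable from the reproductions:</p>

```javascript
// The golden ratio, from which the Modulor is derived.
const PHI = 1.618034;

// Generate n values: multiply by the ratio until we cross the upper
// threshold, then divide until we cross the lower one, and so on.
// The lower/upper thresholds here are placeholders for illustration.
function bouncingSeries(x0, n, lower = 1, upper = 100, ratio = PHI) {
  const values = [x0];
  let x = x0;
  let multiply = true;
  for (let i = 1; i < n; i++) {
    x = multiply ? x * ratio : x / ratio;
    if (x >= upper) multiply = false; // start descending
    if (x <= lower) multiply = true;  // start ascending again
    values.push(x);
  }
  return values;
}
```

<p>Running several of these series with different $x_0$ values and merging the sorted results would give something like the spacing sequences in the facade drawings.</p>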
<p>The striking thing about Xenakis' method is that for all its mathematical exactness and complexity, the result feels entirely natural. It is also worth pointing out that he's not really designing a facade: His project is really a <em>machine for making facades</em>. Between all the possible values of $x_0$, $n$ and $\varphi$, he opens up a <em>facade space</em> filled with an infinite number of points.</p>
<p>Xenakis' algorithm is relatively simple — yet it allows for essentially infinite variations. I saw in this a method to explore the tension between line and tone in my undergrad machine drawings in a structured way.</p>
<p>I began by writing <a href="https://codepen.io/maxakohler/full/WYbQqZ">a tool</a> that would allow me to generate vector drawings following Xenakis' algorithm. I begin with a zig-zag line across the top of the page. For the second line, I take the number of zig-zags in the first and multiply it by a fixed ratio $\varphi$. The third line is derived by multiplying the number of zig-zags in the second with $\varphi$, and so on. Like Xenakis, I switch to division once a certain threshold is reached.</p>
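<p>The core of such a tool is just turning a zig-zag count into a line on the page. A rough sketch of that step as an SVG path generator — the band geometry and function names are my own, not the actual CodePen code:</p>

```javascript
// Build an SVG path string for one horizontal band of zig-zags.
// y is the band's vertical centre; count is the number of zig-zags.
function zigzagPath(y, width, bandHeight, count) {
  // 2 * count straight segments, alternating between the band's
  // top and bottom edges.
  const step = width / (2 * count);
  let d = `M 0 ${y + bandHeight / 2}`;
  for (let i = 1; i <= 2 * count; i++) {
    const edge = i % 2 === 1 ? y - bandHeight / 2 : y + bandHeight / 2;
    d += ` L ${i * step} ${edge}`;
  }
  return d;
}
```

<p>Each successive band then gets its count multiplied (or, past the threshold, divided) by the chosen ratio, and the paths are stacked into one <code>&lt;svg&gt;</code> element for plotting.</p>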
<p><img src="https://maxkohler.com/assets/xenakis/all.jpg" alt="Xenakis facade drawings" /></p>
<p>Xenakis (as far as I can tell) only used the Golden Ratio for his facades. I like using the 12 intervals in the harmonic scale, too.</p>
<table>
<thead>
<tr>
<th></th>
<th>Interval in C</th>
<th>Ratio</th>
<th>Ratio (1:x)</th>
<th>Smaller as % of larger</th>
<th>Larger as % of smaller</th>
</tr>
</thead>
<tbody>
<tr>
<td>unison</td>
<td>C→C</td>
<td>1:1</td>
<td>1</td>
<td>100</td>
<td>100</td>
</tr>
<tr>
<td>minor second</td>
<td>C→D♭</td>
<td>15:16</td>
<td>1.0667</td>
<td>93.75</td>
<td>106.67</td>
</tr>
<tr>
<td>major second</td>
<td>C→D</td>
<td>8:9</td>
<td>1.125</td>
<td>88.89</td>
<td>112.5</td>
</tr>
<tr>
<td>minor third</td>
<td>C→E♭</td>
<td>5:6</td>
<td>1.2</td>
<td>83.33</td>
<td>120</td>
</tr>
<tr>
<td>major third</td>
<td>C→E</td>
<td>4:5</td>
<td>1.25</td>
<td>80</td>
<td>125</td>
</tr>
<tr>
<td>perfect fourth</td>
<td>C→F</td>
<td>3:4</td>
<td>1.3333</td>
<td>75</td>
<td>133.33</td>
</tr>
<tr>
<td>aug. fourth or dim. fifth</td>
<td>C→F♯/G♭</td>
<td>1:√2</td>
<td>1.4142</td>
<td>70.71</td>
<td>141.42</td>
</tr>
<tr>
<td>perfect fifth</td>
<td>C→G</td>
<td>2:3</td>
<td>1.5</td>
<td>66.67</td>
<td>150</td>
</tr>
<tr>
<td>minor sixth</td>
<td>C→A♭</td>
<td>5:8</td>
<td>1.6</td>
<td>62.5</td>
<td>160</td>
</tr>
<tr>
<td>major sixth</td>
<td>C→A</td>
<td>3:5</td>
<td>1.6667</td>
<td>60</td>
<td>166.67</td>
</tr>
<tr>
<td>minor seventh</td>
<td>C→B♭</td>
<td>9:16</td>
<td>1.7778</td>
<td>56.25</td>
<td>177.78</td>
</tr>
<tr>
<td>major seventh</td>
<td>C→B</td>
<td>8:15</td>
<td>1.875</td>
<td>53.33</td>
<td>187.5</td>
</tr>
<tr>
<td>octave</td>
<td>C→C</td>
<td>1:2</td>
<td>2</td>
<td>50</td>
<td>200</td>
</tr>
</tbody>
</table>
<h2 id="notes">Notes</h2>
<ol>
<li>Owen Gregory (2011): <em><a href="https://24ways.org/2011/composing-the-new-canon">Composing the New Canon: Music, Harmony, Proportion</a></em></li>
<li>Sven Sterken (2007): <em>Music as an Art of Space: Interactions between Music and Architecture in the Work of Iannis Xenakis</em>. Available at <a href="https://core.ac.uk/download/pdf/34525212.pdf">core.ac.uk/download/pdf/34525212.pdf</a></li>
<li>Alex Ross (2010): <em>Waveforms: The singular Iannis Xenakis.</em> The New Yorker. Available at <a href="https://www.newyorker.com/magazine/2010/03/01/waveforms">newyorker.com/magazine/2010/03/01/waveforms</a></li>
</ol>
Passports as magical objects2019-01-15T10:00:00Zhttps://maxkohler.com/posts/2019-01-29-passports/<p>On the eve of Brexit, some thoughts on passports:</p>
<ul>
<li>They let you cross invisible lines on the ground.</li>
<li>Everyone only gets one (the rare person with two is envied, three or more make you a character from a spy novel). Every few years you go through a ritual in which the old one is destroyed following a certain protocol, and you're handed a new one.</li>
<li>They have invisible information embedded in them.</li>
<li>We hand them to border police sometimes, but I don't really know what they look at: The picture? Do they check how tall you are? Do they have some way of visually checking the passport is genuine?</li>
<li>On a related note, what do automatic passport gates do? Presumably they could read all the information they need from the chip inside the passport, but instead they make you open the passport and press it onto a surface for what feels like forever — what is the camera looking for? All we know is that if you have the right passport, the gates will open.</li>
<li>Some of them are <a href="https://www.passportindex.org/byRank.php">more powerful than others</a> (although <em>power</em> is vaguely defined).</li>
<li>Rich people can <a href="https://www.businessinsider.com/countries-where-you-can-buy-citizenship-residency-or-passport-2018-9?r=US&IR=T">buy them</a>.</li>
<li><a href="https://www.theatlantic.com/national/archive/2011/03/americas-great-passport-divide/72399/">Poor people don't have them</a></li>
</ul>
<p>Passports seem like the 21st century version of a magical <a href="https://en.wikipedia.org/wiki/Talisman">talisman</a> — they're ubiquitous, yet we have only vague ideas about what they do and how they work. Perhaps in line with the <a href="https://www.theatlantic.com/magazine/archive/2018/06/henry-kissinger-ai-could-mean-the-end-of-human-history/559124/">end of the enlightenment</a>.</p>
Digital Aesthetics2019-01-30T10:00:00Zhttps://maxkohler.com/posts/2019-01-30-digital-aesthetics/<h2 id="reading">Reading</h2>
<ul class="contains-task-list">
<li class="task-list-item"><input class="task-list-item-checkbox" disabled="" type="checkbox" /> <em><a href="https://west.slcschools.org/academics/visual-arts/documents/Laocoon.pdf">Towards a Newer Laocoon</a></em>, Greenberg</li>
<li class="task-list-item"><input class="task-list-item-checkbox" disabled="" type="checkbox" /> <em><a href="http://www.altx.com/remix.fall.2008/flusser.pdf">Towards a Philosophy of Photography</a></em>, Flusser</li>
<li class="task-list-item"><input class="task-list-item-checkbox" disabled="" type="checkbox" /> <em><a href="https://read.dukeupress.edu/differences/article-abstract/18/1/128/97676/The-Indexical-and-the-Concept-of-Medium?redirectedFrom=fulltext">Indexicality and the concept of medium specificity</a></em>, Doane</li>
<li class="task-list-item"><input class="task-list-item-checkbox" disabled="" type="checkbox" /> <em><a href="http://jonahsusskind.com/essays/Krauss_VideoNarcissism.pdf">Video: The Aesthetics of Narcissism, Krauss</a></em></li>
<li class="task-list-item"><input class="task-list-item-checkbox" disabled="" type="checkbox" /> <em><a href="https://monoskop.org/images/a/ae/Bolter_Jay_David_Grusin_Richard_Remediation_Understanding_New_Media_low_quality.pdf">Remediation, Bolter and Grusin</a></em></li>
</ul>
<h2 id="friday%2C-febuary-1st-2019">Friday, February 1st 2019</h2>
<ul>
<li><em>Empire</em> by Andy Warhol basically started <em>structuralist film</em></li>
<li>P. Adams Sitney coins the term, see also <a href="https://monoskop.org/images/6/65/Gidal_Peter_ed_Structural_Film_Anthology.pdf">Structural Film Anthology</a> by Peter Gidal</li>
<li>A lot of structuralist film came out of the London film-makers co-op</li>
<li><em><a href="https://www.youtube.com/watch?v=axucqo_SNLo">Room Film 1973</a></em> by Peter Gidal. He's showing the same shot over and over again — back then a hugely laborious thing to do (you'd have to physically make copies of the film). This is <em>film about film</em>, exposing the underlying mechanics of film-making, or <em>film as film</em>.</li>
<li><em><a href="https://vimeo.com/17173209">Ray Gun Virus (1966)</a></em> by Paul Sharits</li>
<li><em><a href="https://www.youtube.com/watch?v=I7xoNWzm7PQ">Dresden Dynamo (1971)</a></em> by Lis Rhodes. Note that the sound comes from her drawing directly onto the film (which contains sound information in visual form).</li>
<li><em><a href="https://www.youtube.com/watch?v=LDj8Tc6259o">Berlin Horse (1970)</a></em> by Malcolm Le Grice</li>
</ul>
<p>All of these people are making work about analogue film — digital images work entirely differently:</p>
<p><img src="https://maxkohler.com/assets/da/sensor-1.jpg" alt="Quantum efficiency in a CMOS sensor" />
How a single pixel turns photons into an electrical signal. <a href="https://www.youtube.com/watch?v=_KMKYIw8ivc">Source</a></p>
<p><img src="https://maxkohler.com/assets/da/sensor-2.jpg" alt="Bayering" />
Multiple pixels, each with an RGB-coloured filter, are needed to capture colour images. <a href="https://www.skyandtelescope.com/astronomy-resources/astrophotography-tips/redeeming-color-planetary-cameras/">Source</a></p>
<p>Cameras (especially phones) apply all kinds of digital optimisations to images: Simple things like bringing up the saturation or increasing the contrast, but also more advanced operations like smoothing out skin or morphing body parts to make them look more "attractive".</p>
<p>When images are shared online, there are additional processes like scaling and compression. Any screen you look at a photograph on is doing its own, internal optimisations to it.</p>
<p>All of this is to say, digital images have a limited link with reality (certainly not the chemical index that analogue material represents).</p>
<p>Let's make a digital film that explores some of the inherent characteristics of digital images (in the same way the structuralists made films about analogue film-making).</p>
<figure class="post-figure embed-container post">
<div class="embed-placeholder">
<p>
This page contains embedded content from <a href="https://vimeo.com/">Vimeo</a>, who might use cookies and other technologies to track you. To view this content, click <em>Allow Vimeo content</em>.
</p>
<button class="embed-load button">Allow Vimeo content</button>
</div>
<div class="embed" style="padding:56.25% 0 0 0;position:relative;">
<iframe data-src="https://player.vimeo.com/video/315109396?loop=1&color=ffffff&title=0&byline=0&portrait=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" frameborder="0" allow="autoplay; fullscreen" allowfullscreen=""></iframe>
</div>
</figure>
<h2 id="friday%2C-febuary-8th">Friday, February 8th</h2>
<ul>
<li>Southpark: <em>It's not a news story</em></li>
<li>Southpark: <em>Ridding the world of ads</em></li>
<li>James Bridle: <em>Something is wrong on the internet</em>: This talks about creepy kids' Youtube videos. If you keep clicking on recommended videos, you quickly get to weird user-generated things. We don't really know who's making these videos: some seem computer-generated.</li>
<li><em>The new dark age</em></li>
<li>The reason anyone's watching this stuff is the recommendation engine.</li>
<li>You now get Youtube Kids and Netflix Kids.</li>
</ul>
<p>Let's step away from this stuff for a second.</p>
<ul>
<li>Exquisite Corpses in the Tate (by the Surrealists)</li>
<li>William Burroughs' Cut-up technique</li>
</ul>
<h3 id="predictive-text-poems">Predictive text poems</h3>
<p>Tap the first one 20 times.</p>
<blockquote>
<p>I think I have a good day and the other hand I think I have a good day and the other hand.</p>
</blockquote>
<p>Left, Middle, Right</p>
<blockquote>
<p>I have to say it but I think it's a good day at the moment is that you have a good time to be the first half and we have to say it but it is not just a good time.</p>
</blockquote>
<p>'John Berger', then left</p>
<blockquote>
<p>John Berger said that the first half and we have to say it but I think it's a good day and the other hand I think I have a good day and the other hand I think I have a good day and the other hand I think I have a good day and the other hand I think I have a good day and the other hand I think I have a good day</p>
</blockquote>
<p>Only middle suggestion:</p>
<blockquote>
<p>Tap the first half and we have to say it but I think it's a good day and the other hand I think I have a good day and the other hand I think I have a good day and the other hand I think I have a good day and the other hand</p>
</blockquote>
<p>Only right suggestion:</p>
<blockquote>
<p>is there anything I have a good day and the first time since his wife was in my head is that the other hand I think I have to be in my life I think I have a good day at work now I think I have a good day at the moment and we are going for it to the first time in my life and we are looking for a new role in the other day at the moment is that it but I think it's a good idea for it but it but it is not a problem with the first time since his arrival in the first half and we have to say that it is the first half of the year award and the first time since his arrival in the other day and I think I have a good day at work and we are looking for a new role as I have to be in the first half of the other hand I think I have to say that the other day</p>
</blockquote>
<blockquote>
<p>at the office of fair trading to be the other hand I think I have to be in the other day at the moment is that it but I think it's a bit of the year and the other hand I think I will go back and we are going to be the other hand I think I have to be in my life and the first time in a while ago and we are looking for a new job is the first half and we have to be in the first half and we have to be in the other hand I think I have a look and the first time since I have to be in my life I have to say that it but it but it but it but it but it but it but I think I will be in my life I have to be in my head and we</p>
</blockquote>
<p>'A', then only left suggestion</p>
<blockquote>
<p>And I think I have a good day and the other hand I think I have a good day and the other hand I think I have a good day and the other hand I think I have a good day and the other hand I think I have a good day and the other hand I think I have a good day and the other hand I</p>
</blockquote>
<ul>
<li><em>Sunspring</em> (2018): a bit contrived.</li>
<li>John Smith (1992): <em>Gargantuan</em></li>
</ul>
<p>Ways of subverting machines/systems:</p>
<ul>
<li>Also John Smith: Fresh Fruit Venerable</li>
<li>Dieter Kiessling (1998): <em>Two Cameras</em></li>
<li>Kicking the Boston Dynamics dog robot</li>
<li>Cory Arcangel: <em>Beat the Champ</em>. Not unlike people having their Amazon Echo and Google Home talk to each other.</li>
</ul>
<p>All of these tend to go back to the human creator: Look at what this genius artist made this machine do. <em>Sunspring</em> is (maybe) more of an equal collaboration. <a href="https://obvious-art.com/">Obvious Art</a> made the painting auctioned at Christie's.</p>
<h2 id="friday%2C-febuary-22">Friday, February 22</h2>
<p>This week: <em>Time</em> and the digital.</p>
<ul>
<li><em>Realtime</em> only becomes a thing when there's an alternative.</li>
<li>Early TV was <em>only live</em>. To show a film, they would point a TV camera at a projection of a film.</li>
<li>Annabel Nicolson (1973): <em>Reel Time</em>. She's using a sewing machine to manipulate the film strip as it's being projected, making a film that happens in <em>reel time</em>.</li>
<li>David Hall (1974): <em>Progressive Recession</em>: A room full of TVs with cameras on top, but each one goes to a different TV.</li>
<li>Nam June Paik (1965): <em>Nixon</em>. This is the thing in the Tate with the two TVs being distorted by electromagnets.</li>
<li>Nicky Hamlyn (2016): <em>Concentricss</em></li>
<li>Jennifer Ringley: The first reality-TV personality: <a href="https://en.wikipedia.org/wiki/Jennifer_Ringley">JenniCam</a>(1996).</li>
<li>The Academic: Bear Claw</li>
</ul>
<p>Recursive instagram story</p>
<h2 id="friday%2C-march-1">Friday, March 1</h2>
<ul>
<li>Hockney drawings on TV in the 80s and on iPad now</li>
<li>When you project a Hockney iPad drawing on your wall you can have an original Hockney in your room. He says that these images are meant to be accessible, but he hasn't got a website where they're all collected. Instead he emails them to select people and we scrape them off Google Images.</li>
<li>Otto Ford makes (gorgeous) digital drawings, then does one print and throws away the file. Arguably this is shying away from the problem.</li>
<li><em>Art in the Age of Mechanical Reproduction</em></li>
</ul>
<p><img src="https://maxkohler.com/assets/ford.jpg" alt="Otto Ford" />
Otto Ford <a href="https://www.artrabbit.com/events/collider">Source</a></p>
<ul>
<li>Cindy Sherman <a href="https://www.instagram.com/cindysherman/">has an Instagram</a></li>
<li><a href="https://www.wmagazine.com/story/cindy-sherman-instagram-selfie">Facetime with Cindy Sherman</a>, in which she says that her Instagram works are essentially distractions and don't compete with her "serious" work</li>
<li>How do we value labour? Using an app in the way Sherman and Hockney do doesn't take the same skill that doing a painting or printing a photograph does.</li>
<li>Maybe we can see Cindy Sherman's Instagram as readymade photography?</li>
<li><a href="https://www.tate.org.uk/whats-on/tate-britain/exhibition/rachel-maclean-wot-u-about">Rachel MacLean</a></li>
<li>In contrast: <em>Tango</em> (1980) by Zbigniew Rybczynski (which could now be made in a day, in the 80s this took 6 months).</li>
<li>Maybe the Sherman and Hockney digital images are a kind of pop art, like Warhol Soup Cans?</li>
<li><em>Shot on iPhone Campaign</em> doesn't show the product, but the process. Sherman doing the same thing (showing what it's like to use phone apps, not screenshots of them)</li>
</ul>
Typecast2019-01-30T10:00:00Zhttps://maxkohler.com/posts/2019-01-31-typecast/<p><a href="http://rcade.rca.ac.uk/pluginfile.php/69316/mod_resource/content/3/Expanded%20Practice%20-%20Spring%20Term%202019.pdf">Course outline</a></p>
<h2 id="reading">Reading</h2>
<ul class="contains-task-list">
<li class="task-list-item"><input class="task-list-item-checkbox" disabled="" type="checkbox" /> <em><a href="https://www.amazon.co.uk/Theory-Type-Design-Gerard-Unger/dp/9462084408/ref=sr_1_1?ie=UTF8&qid=1549217495&sr=8-1&keywords=Theory+of+Type+Design">Theory of Type Design</a></em>, Gerard Unger</li>
<li class="task-list-item"><input class="task-list-item-checkbox" checked="" disabled="" type="checkbox" /> <em><a href="http://library.rca.ac.uk/client/2015/search/results?qu=Type+%26+Typography&te=ILS#">Type & Typography</a></em>, Phil Baines & Andrew Haslam</li>
<li class="task-list-item"><input class="task-list-item-checkbox" checked="" disabled="" type="checkbox" /> <em>Dimensional Typography</em>, J. Abbott Miller</li>
<li class="task-list-item"><input class="task-list-item-checkbox" disabled="" type="checkbox" /> <em>Fuse 1–20</em>, Neville Brody & Jon Wozencroft</li>
<li class="task-list-item"><input class="task-list-item-checkbox" disabled="" type="checkbox" /> <em>Does Writing Have a Future?</em>, Vilém Flusser</li>
<li class="task-list-item"><input class="task-list-item-checkbox" checked="" disabled="" type="checkbox" /> <em><a href="https://practicaltypography.com/drowning-the-crystal-goblet.html">Drowning the Crystal Goblet</a></em>, Matthew Butterick</li>
<li class="task-list-item"><input class="task-list-item-checkbox" checked="" disabled="" type="checkbox" /> <em><a href="http://www.eyemagazine.com/blog/post/dimensional-typography">Dimensional Typography</a></em>, Leslie Atzmon</li>
<li class="task-list-item"><input class="task-list-item-checkbox" checked="" disabled="" type="checkbox" /> <em><a href="http://letterror.com/articles/is-best-really-better.html">Is Best Really Better?</a></em>, Erik van Blokland, Just van Rossum</li>
</ul>
<p><a href="https://docs.google.com/document/d/1ENURWyF-r-HDoEKIpoMzVs1wMuL6cijbLwurojOYcM8/edit">References Doc</a></p>
<p><a href="https://docs.google.com/document/d/14aemrAxaS6A9qcnpsV8QG-AJRa4DZpghiX0tYCDIYtY/edit">Enquiry Doc</a></p>
<h2 id="thursday%2C-january-31st-2019">Thursday, January 31st 2019</h2>
<p>The workshop is essentially about questioning type: how we develop it, how it's displayed, what it does. The work eventually goes into <em>Typographic Singularity</em> (hopefully at Elephant West).</p>
<p>The brief is <em>Make a piece of typographic work that adds a dimension</em>, such as</p>
<ul>
<li>Time (kinetic type)</li>
<li>Space (dimensional typography)</li>
<li>Data (generative stuff)</li>
<li>Interactivity</li>
</ul>
<p>The work needs to be based on a text, which can be:</p>
<ul>
<li>a location <strong>or</strong></li>
<li>a factual statement <strong>or</strong></li>
<li>a poetic statement <strong>or</strong></li>
<li>an opinion of yours.</li>
</ul>
<p>The outcomes can be speculative (yuch).</p>
<p>Week one is about subversion of tools and processes. If your tool is InDesign, question its assumptions (why is the page limited? Why does it give me default font choices) and subvert them.</p>
<h3 id="the-history-of-type-(slightly-abridged)">The history of type (slightly abridged)</h3>
<ul>
<li>Lettering ≠ Typography</li>
<li>Type is about systems, repetition, process</li>
<li>The first kind of writing is Cuneiform</li>
<li>Interestingly, Cuneiform can be applied to different spoken languages in the same way the Roman alphabet can.</li>
<li>Early written languages are essentially tools for bureaucracy: Most of the clay tablets we have say stuff like <em>Farmer so-and-so has 12 goats, and owes 3 sacks of grain in taxes</em></li>
<li>Cuneiform is a 3D language: the depth of the cuts carries information. This is why we 3D-scan the tablets instead of photographing them</li>
<li>Next: The Romans</li>
<li>You can roughly draw this line to describe the development of the modern alphabet from the Roman: Square Capitals → Rustic Capitals → Uncial → Carolingian Minuscule → modern writing</li>
<li>For roughly a thousand years (from the fall of Rome to Gutenberg), writing was done largely by trained monks.</li>
</ul>
<p><img src="https://maxkohler.com/assets/typecast/gutenberg.jpg" alt="36-line Bible" />
Gutenberg's 36-line Bible (1458-1460). <a href="https://commons.wikimedia.org/wiki/File:36-line_Bible.jpg">Commons</a></p>
<ul>
<li>Letterpress is great for Roman type, but other languages (Arabic, Asian languages) often have to be compressed, simplified to work in letterpress.</li>
<li>OpenType gives us way more power: We can essentially program all kinds of behaviour directly into our typefaces (such as contextual alternates).</li>
<li>Unicode is big enough to hold enormous charactersets (like you might need for Chinese), way more than a printer's typecase or Linotype keyboard.</li>
<li>Typefaces can advance social goals: <a href="https://www.aravrit.com/">Aravrit</a> combines Arabic and Hebrew script in such a way that speakers of both languages can read it.</li>
<li>Monospace typefaces exist to make typewriters work.</li>
<li><a href="https://ia.net/topics/in-search-of-the-perfect-writing-font">Duospace</a></li>
<li>In the 1970s, photosetting allowed people to do all kinds of things that were impossible in letterpress: stretch, compress, rotate and scale type freely. Fonts for photosetting had to have reverse ink-traps so they wouldn't look rounded off (because light would bleed around the edges).</li>
<li><a href="https://typographica.org/typeface-reviews/demos-next/">Demos</a> is a typeface that's inspired by the smoothed-over look you get from photosetting (also the kind of thing that would be impossible to cut into a punch).</li>
<li>Compare also <a href="https://frerejones.com/families/retina">Retina</a> and Bell Centennial (the phonebook typeface with the mad ink traps) and <a href="https://typographica.org/typeface-reviews/minuscule/">Minuscule</a> (a typeface that's designed to be readable at 2pt)</li>
<li>A whole aesthetic comes from dot matrix printers and low-res LED displays. LEDs of course are also very good at making type move.</li>
</ul>
<figure class="post-figure embed-container post">
<div class="embed-placeholder">
<p>
This page contains embedded content from <a href="https://youtube.com/">Youtube</a>, who might use cookies and other technologies to track you. To view this content, click <em>Allow Youtube content</em>.
</p>
<button class="embed-load button">Allow Youtube content</button>
</div>
<div class="embed" style="padding:91.96% 0 0 0;position:relative;">
<iframe data-src="https://www.youtube.com/embed/OsNmrCgwwQM" style="position:absolute;top:0;left:0;width:100%;height:100%;" frameborder="0" allow="autoplay; fullscreen" allowfullscreen=""></iframe>
</div>
<figcaption>
<span class="figure-caption">
<p>Poem Field No. 1 (1967) by Stan Vanderbeek</p>
</span>
</figcaption>
</figure>
<ul>
<li>Machine readable typefaces: OCR-A and OCR-B</li>
<li>Wim Crouwel (1967): <em><a href="https://www.moma.org/collection/works/139322">New Alphabet</a></em></li>
<li>Tomato: <a href="https://vimeo.com/239379931">Sony Corporate Identity</a></li>
<li>With LCD screens <em>hinting</em> starts to become a thing. Wonder how much that changes how we design, think about, read type. Verdana was a huge design effort, largely because of all the manual hinting.</li>
<li>Then in the 90s we start to get what we'd probably call Grunge.</li>
<li>Emigre, Fuse, Raygun Magazine</li>
<li>Brody: <em><a href="https://www.moma.org/collection/works/139325">FF Blur</a></em> (which is in the MoMA)</li>
<li>Also around the early 2000s: Experiments in interactive type</li>
<li><em><a href="https://www.moma.org/collection/works/139326">FF Beowolf</a></em> by Blokland and Rossum is probably the first typeface that has behaviour programmed into it: Each time you type a letter, the vector points are moved around by a randomised amount (within certain limits — these define the cuts of the typeface). This is essentially asking the question: Does every letter in a typeface always have to look the same (as it has for the entire history of printing)?</li>
</ul>
<h2 id="febuary-28%2C-2019">February 28, 2019</h2>
<p>Nicky Hamlyn on text in film.</p>
<ul>
<li><em>Duchamp</em> (1910). In those days you couldn't buy a camera: You had to have one made. Rotating poems that repeat themselves, also wordplay and moving around of letters. <em>Anemic Cinema</em> (1926). See also Vertigo records in the 70s.</li>
<li>Also: <em>L.H.O.O.Q.</em> (1919)</li>
<li><em>Word Movie</em> (1966) by Paul Sharits. Like a lot of his early work, this is done frame-by-frame.</li>
<li><em>Don't look back</em> documentary on Bob Dylan.</li>
<li><em>Subterranean Homesick Blues</em>. Dropping cue cards.</li>
<li>Michael Snow <em>So Is This</em> (1982). <em>This film is two hours long: does that seem like a frightening prospect?</em> (The actual thing is 48 minutes long, of course this kind of falls apart when you watch it on Youtube). <a href="https://www.youtube.com/watch?v=iol8n3m88SA">Apple ripping it off</a>. Film about</li>
<li>Title sequence from Jean-Luc Godard: <em>Pierrot le Fou</em>. Technicolor was shot on three rolls of 35mm black and white film with coloured filters, which would later be added together in printing. The red, blue and white is a Jean-Luc Godard thing. Letters come in in alphabetical order.</li>
<li><em>Associations</em> (1975) by John Smith. Also: <em>Steve hates fish</em> (2015), which is the French > English translation thing.</li>
<li>Kurt Kren often uses simple technical ideas to produce interesting effects. <em>42/83 No Film</em> (1983): A collapsing of the title and the film. The thing announces it's a film, yet it can only exist as a film: negative image is a fundamentally photographic process.</li>
<li>Lost Highway opening sequence</li>
</ul>
<h2 id="march-7%2C-2019">March 7, 2019</h2>
<ul>
<li><a href="http://images.adsttc.com/media/images/58f5/63e3/e58e/cea0/5200/0057/newsletter/seoul-neo-brutalist-02.jpg?1492476891">Neo-Brutalism</a></li>
<li><a href="https://www.wmagazine.com/gallery/michael-brown-new-york-times-posters-alexandra-bell-brooklyn/all">A teenager with promise</a></li>
<li><a href="https://en.wikipedia.org/wiki/I_Am_Sitting_in_a_Room">I am sitting in a room</a></li>
</ul>
<p>Idea that a pretty basic mathematical process (Levenshtein) is mashing these different authors together. Original author, secondary author, the machine, the person typing are all interacting with the same text. The fundamental operation here is the <a href="https://en.wikipedia.org/wiki/Levenshtein_distance">Levenshtein distance</a>.</p>
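<p>A minimal dynamic-programming sketch of that fundamental operation (illustrative only, not the actual book-generating code):</p>

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions and
    substitutions needed to turn string a into string b."""
    # prev[j] holds the distance between the current prefix of a and b[:j]
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # delete ca
                            curr[j - 1] + 1,      # insert cb
                            prev[j - 1] + cost))  # substitute ca with cb
        prev = curr
    return prev[-1]
```

<p>For example, <code>levenshtein("kitten", "sitting")</code> gives 3.</p>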
<p>Next week: Continue the exploration (we're making the books). Two weeks later (on the 28th) is another session that's about what's actually going to be in the show.</p>
<p>I think me trying to find interesting text/letterform combinations is a bit contrived. The more compelling thing is to produce every possible combination of books (minus the originals) and finding out where interesting coincidences happen. In other words:</p>
<p><img src="https://maxkohler.com/assets/typecast/grid.svg" alt="grid" /></p>
<p>The number of books I need to make based on this principle is:</p>
<p>$$n_{\text{Books}} = (n_{\text{Sources}})^2 - n_{\text{Sources}} $$</p>
<p>So for 12 sources (which seems like a lot):</p>
<p>$$n_{\text{Books}} = 12^2 - 12 = 132$$</p>
<p>I guess the good thing is that I can just start making books (once I've written code to do that automatically) and keep adding sources until I run out of time.</p>
Drawing the line2019-02-13T10:00:00Zhttps://maxkohler.com/posts/2019-02-13-drawing-the-line/<p>I wrote about unpaid design projects (especially in art schools) on <a href="http://content-free.net/">Content Free</a>, the online publication of the visual communication programme at the RCA.</p>
<p>This is also the first time I've done editorial illustration (I think) — fun stuff.</p>
<p><a href="http://content-free.net/articles/drawing-the-line">Read the article here</a>.</p>
Who Counts? 2019-06-01T10:00:00Zhttps://maxkohler.com/posts/2019-06-01-the-hague/<h2 id="day-1">Day 1</h2>
<h3 id="non-human-publishing">Non-Human Publishing</h3>
<p>Wikipedia has bot users to update categories and things. MediaWiki has no special interface to indicate that a user is non-human. Cydebot is the most prolific editor on Wikipedia at 5.1m edits.</p>
<p>We tend to think of Wikipedia as this big human endeavour, but there's actually lots of automated labour going on.</p>
<p><em>Even good bots fight: The case of Wikipedia</em> says that Wikipedia bots sometimes get into edit wars.</p>
<p>Publishing is thought of as humans making something public for other humans, but it turns out Wikipedia bots and others also publish stuff all the time. This is not some future scenario; it's already happening.</p>
<p>Automated writing in the news business, like the AP automating quarterly earnings report articles, or the Washington Post automating local sports coverage. Interestingly, the AP coverage increases trading -- being covered in the publication helps companies.</p>
<p>2016 flash crash (the pound dipped by 6% almost instantly). Turns out it was algorithmic traders. With the AP thing, you can imagine a bot writing a story, then bots trading on that story using sentiment analysis, then more stories etc.</p>
<p>So, bots are entangled with humans, and also each other. We also arguably publish for bots by feeding data into them. See GPT-2.</p>
<p>Barthes: Death of the Author</p>
<p>[Project Debater]
An IBM project that chains together a language model, language comprehension and knowledge graphs. Trained on Aristotle?</p>
<h2 id="day-2">Day 2</h2>
<p>Bureau of Artificial Intelligence</p>
<p>how do you feel about abortion
it's your views that matter</p>
<p>can i get an abortion
(gives )</p>
<ul>
<li>Sadie Plant: <em>Zeros and Ones</em> (1997) on the history of techno-feminism. Automation/Computation + Feminist theory</li>
<li>Please re-type password to access order details. {topic=order-password} <add replyCount="1"></add></li>
</ul>
Archiver2019-09-29T10:00:00Zhttps://maxkohler.com/posts/2019-09-29-archiver/<h2 id="30-september%2C-2019">30 September, 2019</h2>
<p>Sekula:</p>
<blockquote>
<p>The central artefact of [Bertillon's] archive is the filing cabinet, not the photograph.</p>
</blockquote>
<p>Archive as a body of literature independent of the objects inside it. All the descriptions, titles (written by archivists, not artists), but also categories, access status, reference numbers and so on.</p>
<p>Tate Archive contains different <em>collections</em>:</p>
<ul>
<li>TG - Tate Public Records</li>
<li>TGA - Tate Archive Collections</li>
<li>TAM - Tate Archive Collections on Microfiche</li>
<li>TAP - Posters Collection</li>
</ul>
<p><a href="https://www.theatlantic.com/technology/archive/2018/07/microfilm-lasts-half-a-millennium/565643/">Microfilm Lasts Half a Millennium</a> in the Atlantic</p>
<p>The search function has <a href="http://archive.tate.org.uk/DServe/Searchhelp.htm#words">boolean logic</a>: Heading must contain (term A AND term B) or (term C). Archive arithmetic.</p>
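<p>A toy version of that matching rule (the case-insensitive substring behaviour is my guess at how the search works, not a documented fact about DServe):</p>

```python
def heading_matches(heading: str, a: str, b: str, c: str) -> bool:
    """True if heading contains (term a AND term b) OR (term c),
    ignoring case -- the query shape described above."""
    h = heading.lower()
    return (a.lower() in h and b.lower() in h) or c.lower() in h
```

<p>So a heading mentioning both term A and term B matches, as does one mentioning only term C.</p>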
<p>There is a big long list of uncatalogued items. Must be hundreds, if not thousands, of boxes. http://archive.tate.org.uk/TateArchiveUncatCollList.pdf</p>
<p>Things are organised on different levels:</p>
<ul>
<li>Fonds: the top-level record which gives an overview of the contents of a collection</li>
<li>Sub-Fonds</li>
<li>Series</li>
<li>Sub-Series</li>
<li>File</li>
<li>Item: record containing information about one or more specific items</li>
<li>Singleitem: a record describing a collection which only contains a single item</li>
</ul>
<p>Found some kind of test item <a href="http://archive.tate.org.uk/DServe/dserve.exe?dsqServer=tdc-calm&dsqIni=Dserve.ini&dsqApp=Archive&dsqCmd=Show.tcl&dsqDb=Catalog&dsqPos=0&dsqSearch=%28%28%28text%29%3D%27*%27%29AND%28Level%3D%27Piece%27%29%29">here</a>.</p>
<p>A search for 'test' gives a few more of these. There doesn't seem to be a way to link to a search results page.</p>
<p>The website runs on something called DServe, which seems discontinued. Used to be made by Axiell.
https://www.axiell.com/uk/solutions/archiving-software/</p>
<p>They sell all kinds of stuff for libraries and museums, including things like post-it notes.</p>
<h2 id="numbers">Numbers</h2>
<ul>
<li>720 Fonds (Collections)</li>
<li>53 Sub-Fonds</li>
<li>2225 Series</li>
<li>4331 Sub-Series</li>
<li>25983 Files</li>
<li>92720 Items</li>
<li>1636 SingleItems</li>
<li>192 Pieces</li>
</ul>
<p>The photographic collection (TGA) has images of all kinds of exhibitions, including random people's private collections.
http://archive.tate.org.uk/tgaphotolists/TGAPHOTO7PrivateAndCorporateCollections.pdf</p>
<p>Some documentation on the digitisation process: https://www.tate.org.uk/art/archive/archives-access-toolkit</p>
<p>Empty pages: Stuff that gets archived kind of by accident. Things between the historically important stuff.</p>
<p>Francis Bacon drawing that's almost nothing. https://www.tate.org.uk/art/archive/items/tga-9810-4/bacon-incomplete-letter-with-drawn-lines</p>
<ul>
<li>"Incomplete letter with drawn lines"</li>
<li>"Extract from unidentified boxing magazine with photograph of Jack Dempsey and Gene Tunney"</li>
<li>"Page of text"</li>
<li>"the page is black except for undistinguishable ink mark"</li>
</ul>
<p>People wrote so many letters back in the day -- are we keeping artists' emails now?</p>
<p>Looks like things are kept in the arrangement in which they're acquired (Collections). So, for instance, TGA 871 has some stuff from JMW Turner's studio, but also letters from the 1960s and someone's MA dissertation. Even when a collection comes directly from an artist, it often contains ephemera, letters from other people, reproductions of work, newspaper clippings and such. A collection of collections of collections.</p>
<ul>
<li>TGA 898/1/1</li>
</ul>
<p>[see also Manovich re: re-arranging of pre-existing cultural material]</p>
<p><img src="https://maxkohler.com/assets/archiver/TGA-8421-1-6-6_10.jpg" alt="TGA 8421-1-6-6_10" />
TGA 8421-1-6-6_10</p>
<p>Blank Pages</p>
<ul>
<li>Front of postcards</li>
</ul>
<p>The images have titles and other meta information in the EXIF fields. Digital image: more than just a visual record, has metadata baked right into it.</p>
<p>Who picks the featured images? I guess some of the goal here is to generate engagement (that's also what the process writing talks about).</p>
<p>Archive items are tied in / cross-referenced with other Tate data structures: Artists, finished works (which appear to live in a separate database, interesting where the distinction lies there), "Features" (which are articles), related artists (chosen by who knows what algorithm), and tags.</p>
<p>Some things have lost all their context:</p>
<ul>
<li>TGA 779/8/94: Photograph of an unidentified man (1920-1960)</li>
<li>TGA 779/8/111: Photograph of an unidentified building (1920-1960)</li>
</ul>
<p>Everything has different licensing attached to it. Every item wound up in countless systems and paper trails.</p>
<h2 id="2-october%2C-2019">2 October, 2019</h2>
<p><img src="https://maxkohler.com/assets/archiver/records-cont.PNG" alt="Records Continuum" />
<a href="https://en.wikipedia.org/wiki/Records_Continuum_Model">Records Continuum Model</a> after Upward.</p>
<h2 id="9-october">9 October</h2>
<p>Spent an hour photographing shelves in the store — about 800 images in total, some usable.</p>
<p><img src="https://maxkohler.com/assets/archiver/S07B5579.jpg" alt="Archive Shelf" /></p>
<ul>
<li>On labels: One archivist passing knowledge onto the next one. The archive as a body of writing / knowledge.</li>
</ul>
<h2 id="10-october">10 October</h2>
<p>Borges (1993): <a href="https://ccrma.stanford.edu/courses/155/assignment/ex1/Borges.pdf">The Analytical Language of John Wilkins</a>:</p>
<blockquote>
<p>These ambiguities, redundancies and deficiencies [in Wilkins's constructed language] remind us of those which doctor Franz Kuhn attributes to a certain Chinese encyclopaedia entitled 'Celestial Empire of benevolent Knowledge'. In its remote pages it is written that the animals are divided into: (a) belonging to the emperor, (b) embalmed, (c) tame, (d) sucking pigs, (e) sirens, (f) fabulous, (g) stray dogs, (h) included in the present classification, (i) frenzied, (j) innumerable, (k) drawn with a very fine camelhair brush, (l) et cetera, (m) having just broken the water pitcher, (n) that from a long way off look like flies.</p>
</blockquote>
<p>Obviously this isn't real (because that's what Borges does).</p>
<p>Foucault comments on the passage at length in <em>The Order of Things</em> (1966) saying the reason this is so funny/uncomfortable is that we can't imagine a space in which all of these categories can exist at the same time — they have no shared criteria, rules of <em>sameness</em>.</p>
<h2 id="14-october">14 October</h2>
<ul>
<li>Can manipulate the number of catalogue entries shown per page by passing <code>&dsqNum=50</code> url parameter.</li>
<li>Completely scraped the online catalogue. Took about two days to write the Puppeteer script — multiple runs required to merge top-level entries with lower level ones. Actual running time probably 3-4 hours. The final JSON file is 10.7MB — less than I expected.</li>
<li>The online catalogue only goes two levels deep: Collections (of which there are 720) and whatever sits directly below them in the hierarchy. Sometimes these are <code>items</code>, but many are <code>series</code> or <code>files</code> containing more items for which no catalogue entries exist (at least no public ones).</li>
<li>Now that I have the data in a structured format, I can analyse it locally much more effectively.</li>
<li>There is a huge discrepancy in the numbers of items I've scraped and the numbers I gathered on <a href="https://maxkohler.com/posts/2019-09-29-archiver/#30-september-2019">September 30</a> by searching the online catalogue.</li>
</ul>
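<p>A sketch of how paged catalogue URLs might be assembled using the <code>dsqNum</code> parameter noted above (the base URL and parameter names come from the catalogue links; treating the rest of the query string as fixed is my assumption):</p>

```python
from urllib.parse import urlencode

BASE = "http://archive.tate.org.uk/DServe/dserve.exe"

def catalogue_url(search: str, per_page: int = 50) -> str:
    # dsqNum controls how many catalogue entries appear per results page
    params = {
        "dsqApp": "Archive",
        "dsqCmd": "Show.tcl",
        "dsqDb": "Catalog",
        "dsqSearch": search,
        "dsqNum": per_page,
    }
    return BASE + "?" + urlencode(params)
```

<p>Passing a larger <code>per_page</code> cuts down the number of pages a scraper has to walk.</p>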
<h2 id="30-october">30 October</h2>
<p>Fairly straightforward to go through all the second-level entries in my dataset and output lists of fields like <code>Access Conditions</code> and <code>Acquisition History</code>. Visually, this starts to resemble works like <em><a href="http://www.paglen.com/?l=work&s=codenames&i=3">Codenames</a></em> (2001) by Paglen, <em><a href="https://www.aiweiwei.com/projects/5-12-citizens-investigation/name-list-investigation/index.html">5.12 Citizen's Investigation</a></em>, and the <a href="https://www.theguardian.com/world/2018/jun/20/the-list-europe-migrant-bodycount">list of migrants who died on their way to Europe</a>.</p>
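<p>That field-extraction step can be sketched like this (the flat list-of-dicts JSON shape and the file name are assumptions, not the real scraped structure):</p>

```python
import json

def collect_field(entries, field):
    """Pull every non-empty value of one catalogue field,
    e.g. 'Access Conditions', out of a list of entry dicts."""
    return [entry[field] for entry in entries if entry.get(field)]

# Hypothetical usage against the scraped catalogue:
# with open("catalogue.json") as f:
#     entries = json.load(f)
# print("\n".join(collect_field(entries, "Access Conditions")))
```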
<p><img src="https://maxkohler.com/assets/archiver/list.png" alt="List of access conditions" />
List of access conditions</p>
<ul>
<li>I like Paglen's piece for how abstract it is — the work uses journalistic methods, but the mode of address is more subtle.</li>
<li>He's using that typeface and centered columns to suggest war memorials</li>
</ul>
<p><img src="https://maxkohler.com/assets/archiver/list-2.png" alt="List of extents" />
Scraped catalogue rendered as a long table. Note that not all data fields are represented here.</p>
<p>The archive as a layering of tables (using Foucault's notion of that term). Tables within tables within tables.</p>
<h2 id="november-16">November 16</h2>
<h2 id="montford-(2003)%3A-twisty-little-passages">Montfort (2003): Twisty Little Passages</h2>
<p><img src="https://maxkohler.com/assets/archiver/if-1.jpg" alt="Screenshot showing first iteration of IF work" /></p>
<ul>
<li>We're making a text adventure because the primary material of the archive is text. Therefore it's an appropriate medium to explore that corpus - I don't need to translate or "visualise" anything.</li>
<li>Also allows me to positively make something instead of doing just criticism for which I'm unqualified anyway.</li>
<li>In the game, you play the archivist. Your goal is to find a certain item in the archive. The game world (or "map", which brings to mind Borges) is closely modeled on the real Tate archive.</li>
<li>A large part of the language in the game comes from the Tate archive catalogue (which I've scraped). You'll be able to access any item on any shelf in the game and get information about it.</li>
<li>The game is a way of approaching architecture, access, archival stationery, and hierarchy at the same time.</li>
</ul>
<h3 id="preface">Preface</h3>
<p>Text adventures are a textual representation of some imagined game world (not unlike the archive itself).</p>
<blockquote>
<p>The setting of an interactive fiction work [...] is more than a setting. It is a simulated <em>world</em>, which in practice is represented computationally in some sort of data structure or collection of objects. It is this simulated world that distinguishes a work of interactive fiction from a conversational character or from an expert system that employs natural language understanding.
viii</p>
</blockquote>
<ul>
<li>Two components: Parser and World Model</li>
</ul>
<blockquote>
<p>The world model is typically implemented in the interactive fiction program as some type of graph [referring to the mathematical model, which also applies to the archive!] or tree structures of some sort (eg record, object, list) with associated procedures, methods, or functions (Graves 1987).
ix</p>
</blockquote>
<h3 id="1%3A-the-pleasure-of-the-text-adventure">1: The pleasure of the text adventure</h3>
<blockquote>
<p>The person who reads and writes to interact is the "operator" of an interactive fiction in cybertextual terminology (Aarseth 1997); in general computing terms, this person is the "user". So as to emphasize that the actions of reading, writing, playing, and figuring out are all involved in such operation or use, the term "interactor" is used in this book.
3</p>
</blockquote>
<ul>
<li>Parsers can range from "verb OR verb noun" to full NLP systems.</li>
<li>IF can be understood as literature, game, software.</li>
<li>Oulipo notion of <em>potential literature</em></li>
<li>One interaction with the program is called a <em>session</em>, a transcript of that interaction (including both the program and the interactor) is called a <em>session text</em>.</li>
<li><em>Diegetic</em>, <em>extradiegetic</em> and <em>hyperdiegetic</em> texts coexist in interactive fiction, ie:
<ul>
<li><em>go west</em> from the interactor is a diegetic <em>command</em></li>
<li><em>save game</em> extradiegetic <em>directive</em></li>
<li><em>You are standing in a field</em> from the program diegetic <em>reply</em></li>
<li><em>I didn't understand that word</em> extradiegetic <em>report</em></li>
</ul>
</li>
<li>Different levels of narration breaching each other is called <em>metalepsis</em>.</li>
<li>A <em>traversal</em> of a work is a <em>course</em> that extends from the <em>initial situation</em> (the first thing the program writes on the screen) to a final reply. Also a term in graph theory, which makes sense.</li>
</ul>
PanAm's World2019-09-29T10:00:00Zhttps://maxkohler.com/posts/2019-09-29-pan-ams-world/<figure class="post-figure small">
<img alt="A 1971 poster for PanAm shows men on horseback against a sunset. Text reads 'Argentina / PanAms World'." loading="lazy" src="https://maxkohler.com/assets/geismar.jpg" />
<figcaption>
<span class="figure__caption">
<p>Chermayeff & Geismar (1971): Poster from “PanAm’s World” Campaign.</p>
</span>
<span class="figure__source">
<p><a href="http://eyemagazine.com/feature/article/flight-of-the-imagination">Eye Magazine</a></p>
</span>
</figcaption>
</figure>
<p>When I was an undergraduate student, a relative gave me this big coffee table book about airline visual identities. It documents the visual output of the 20th century aviation industry in all its glory: Full-bleed Kodachrome photography overlaid with tightly-set Helvetica at <a href="http://www.eyemagazine.com/feature/article/flight-of-the-imagination">PanAm</a>. Colourful, bold illustrations and lettering, printed in stone lithography well into the 1950s at <a href="https://image.jimcdn.com/app/cms/image/transf/none/path/s845a70f74d8b0138/image/i72f33aa726ac2f98/version/1545933097/image.jpg">Air France</a>. The cool functionalism of HfG Ulm at <a href="http://ravenrow.org/exhibition/the_ulm_model/">Lufthansa</a>.</p>
<p>I love looking at these images. But as I do so now, at a time when the climate crisis has (rightfully) become the subject of almost daily news coverage, I’m also acutely aware of the ruinous impact commercial aviation has on the world. It contributes around 2.5% to global CO2 emissions<a href="https://www.nytimes.com/2019/09/19/climate/air-travel-emissions.html">, a figure which is rising sharply,</a> not to mention the countless ways aircraft noise and pollution cause misery around the world, and the damage caused by its various supply chains.</p>
<p>Most of the material in the book was published around the middle of the century, when the science around global heating was already well-understood, and we still had the chance to avert most of the damage — <a href="https://www.nytimes.com/interactive/2018/08/01/magazine/climate-change-losing-earth.html">a chance which we squandered</a>.</p>
<p>Against this background, doing any design work for the aviation industry (like those beautiful posters reproduced in the book) seems morally impossible to me. This was a painful realisation: Throughout my education, <em>designing an airline</em> was always one of those crowning achievements waiting at the end of a successful career. That aspiration is gone.</p>
<p>But the trouble doesn’t end there: Since I had that realisation a few months back, the list of industries that seem at least questionable to work for in light of the climate emergency has been getting longer and longer. First, I added fossil fuel corporations, car manufacturers, shipping companies and the like. Then it became clear that the tech industry doesn’t have a great record either: The CO2 emissions of the world’s data centres already rival those of the aviation industry, and <a href="https://www.nature.com/articles/d41586-018-06610-y">are likewise rising</a>. And, of course, Google is directly <a href="https://www.theguardian.com/environment/2019/oct/11/google-contributions-climate-change-deniers">funding climate-denying thinktanks</a>, apparently to save taxes. What about cultural institutions who are increasingly relying on these industries for funding? Universities <a href="https://peopleandplanet.org/university/129827/ul19">who refuse to divest</a>?</p>
<p>This is where I’m beginning to think that something bigger has been lost here. It’s not just that I can’t imagine doing design work for an airline; I can’t imagine designing anything as optimistic, openly in favour of consumption, excited about technological progress as the PanAm poster <em>for anyone.</em> Graphic design was never innocent in the destruction of the planet, but over the past few months I’ve felt more viscerally guilty than ever before.</p>
<hr />
<p>The usual response to concerns about the environmental damage caused by graphic design is technological fixes: Printing with soy-based inks on recycled paper, using lighter typefaces to save on ink, or moving from print to digital: <em>Please consider the environment before printing this email</em><sup class="footnote-ref"><a href="https://maxkohler.com/posts/2019-09-29-pan-ams-world/#fn1" id="fnref1">1</a></sup>.</p>
<p>These ideas are no doubt well-intentioned, but ultimately they're incremental improvements to a deeply broken system — equivalent to replacing petrol cars with electric ones, or plastic straws with cornstarch. At best, what these proposals achieve is to shift <em>which</em> natural resources we’re going to waste on consumerism.</p>
<p>The only way I can see to make graphic design truly sustainable is to make significantly <em>less of it</em>. Smaller print runs, less packaging, fewer, lighter websites, less advertising, less <em>stuff</em>.</p>
<figure class="post-figure small">
<img alt="Plastic debris is washed up on a beach." loading="lazy" src="https://maxkohler.com/assets/pacific.jpg" />
<figcaption>
<span class="figure__caption">
<p>Some visual communication in the Pacific</p>
</span>
<span class="figure__source">
<p><a href="https://www.flickr.com/people/48889057888@N01">Kevin Krejci</a>, <a href="https://creativecommons.org/licenses/by/2.0/">CC BY 2.0</a></p>
</span>
</figcaption>
</figure>
<p>It's depressing and scary to be a young worker coming into an industry that, like many other industries, needs to shrink or otherwise transform itself radically to limit the damage it causes to our planet.</p>
<p>The reason it’s so scary is that my entire understanding of economics, of how you're supposed to become successful in the world, taught to me by my parents and teachers, comes from the same era as those PanAm posters: <em>Success equals growth, bigger equals better, technology will save the day</em>.</p>
<p>Success in graphic design is measured following the same logic: Whoever gets to work with the largest print budgets, the biggest brands, whoever gets the most views, whoever shows their work in the most countries is the most successful.</p>
<p>But with every news story about a wild fire, heatwave, storm, or flood, that model looks more untenable. Smarter people than me are working on alternative models for design, and cultural production in general. In <em>Duty Free Art</em><sup class="footnote-ref"><a href="https://maxkohler.com/posts/2019-09-29-pan-ams-world/#fn2" id="fnref2">2</a></sup>, Hito Steyerl writes:</p>
<blockquote>
<p>The contrary [to current ways of doing design] is a process that doesn’t grow via destruction, but very literally de-grows constructively. This type of construction is not creating inflation, but devolution. Not centralized competition but cooperative autonomy. Not fragmenting time and dividing people, but reducing expansion, inflation, consumption, debt, disruption, occupation, and death.</p>
</blockquote>
<p>Intellectually, I know that what she and others are proposing is true, necessary, and probably without alternative. But emotionally, I’m not ready to feel hopeful about that new form of design practice: I’m not done grieving the end of the old one.<sup class="footnote-ref"><a href="https://maxkohler.com/posts/2019-09-29-pan-ams-world/#fn3" id="fnref3">3</a></sup></p>
<section class="footnotes">
<ol class="footnotes-list">
<li id="fn1" class="footnote-item"><p>I’m not aware of any similar ideas for screen-based design, maybe because we tend to assume it’s cleaner by default? It isn’t: See datacenter emissions. <a href="https://maxkohler.com/posts/2019-09-29-pan-ams-world/#fnref1" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn2" class="footnote-item"><p>Hito Steyerl (2019): <em>Duty Free Art: Art in the Age of Planetary Civil War</em>, p. 18. Verso Books <a href="https://maxkohler.com/posts/2019-09-29-pan-ams-world/#fnref2" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn3" class="footnote-item"><p>This essay was first published on <a href="http://content-free.net/articles/panams-world">Content Free</a>. <a href="https://maxkohler.com/posts/2019-09-29-pan-ams-world/#fnref3" class="footnote-backref">↩︎</a></p>
</li>
</ol>
</section>
Type Methods2019-10-02T10:00:00Zhttps://maxkohler.com/posts/2019-09-29-type-methods/<h2 id="7-november">7 November</h2>
<p>Hoping to develop a typeface out of this lettering I did:</p>
<p><img src="https://maxkohler.com/assets/posters/ploughshares.jpg" alt="Cars to ploughshares poster" /></p>
<p>Maybe also take some stuff from <a href="http://awesomephant.github.io/node-paint/">Node Paint</a> (extended pen nib).</p>
<p><img src="https://maxkohler.com/assets/type-methods/sketch-1.jpg" alt="Lettering sketch" />
<img src="https://maxkohler.com/assets/type-methods/sketch-2.jpg" alt="Lettering sketch" /></p>
<h2 id="14-november">14 November</h2>
<p><img src="https://maxkohler.com/assets/type-methods/grid.jpg" alt="Lettering animation frame" /></p>
<p>Above might be useful for bold weights down the line.</p>
<h2 id="15-november">15 November</h2>
<p><img src="https://maxkohler.com/assets/type-methods/hamburg-1.jpg" alt="Hamburg 1" /></p>
<ul>
<li>Like the contrast in the <em>H</em> and <em>A</em>.</li>
<li>Horizontal stress</li>
</ul>
<h2 id="17-november">17 November</h2>
<p><img src="https://maxkohler.com/assets/type-methods/Capture-3.PNG" alt="Type sample" /></p>
<p>V1 of the uppercase nearly finished. I'm trying to keep the wiggles in the same position vertically, so I can eventually kern the letters into each other like I did in the original lettering. Consistency is good, but might lead to the letters looking too similar (boring).</p>
<p>The number of wiggles (left-right-left from top to bottom) makes the letters quite busy — maybe number of squiggles could relate to optical size? I like them in the <em>O</em> and <em>R</em>, much too pointy in "W" and "A". Weight also inconsistent.</p>
Junk City2020-02-29T10:00:00Zhttps://maxkohler.com/posts/2020-02-29-junk-city/<p><span class="leadin">“Architecture disappeared in the twentieth century”</span>, writes Rem Koolhaas in the opening paragraph of <em>Junkspace</em> (2001)<sup class="footnote-ref"><a href="https://maxkohler.com/posts/2020-02-29-junk-city/#fn1" id="fnref1">1</a></sup>. It was replaced, he continues, by <em>Junkspace —</em> a kind of built environment that isn’t really designed but kind of just <em>happens</em> when you throw together a load of drywall, prefab concrete slabs, venture capital, air-conditioning, elevators, and vinyl stickers and hot-glue everything into the shape of an office complex. Koolhaas sees in this not an aberration, but the dominant form of contemporary building, “the essence, the main thing”.</p>
<p>Construction on Garden House, the RCA’s temporary White City campus, began in 2001<sup class="footnote-ref"><a href="https://maxkohler.com/posts/2020-02-29-junk-city/#fn2" id="fnref2">2</a></sup> — the same year Koolhaas’ text was first published in the <em>Harvard Guide to Shopping.</em> Based on the timeliness of its publication, and the fact that it manages to put into words all the dread this building induces in me, I propose <em>Junkspace</em> as the unofficial companion essay to RCA White City; RCA White City as the companion building to <em>Junkspace</em>.</p>
<h2 id="endless-space">Endless Space</h2>
<figure class="post-figure post">
<img alt="Photograph of office-like interior of RCA White City" loading="lazy" src="https://maxkohler.com/assets/junk-city/conditional-space.jpg" />
<figcaption>
<span class="figure__caption">
<p>Continuity is the essence of Junkspace</p>
</span>
<span class="figure__source">
<p><a href="https://www.rca.ac.uk/study/facilities-support/our-campus/rca-white-city/">Source</a></p>
</span>
</figcaption>
</figure>
<p>In a text published a few decades earlier<sup class="footnote-ref"><a href="https://maxkohler.com/posts/2020-02-29-junk-city/#fn3" id="fnref3">3</a></sup>, Koolhaas describes how electricity, elevators, and air conditioning moved from the magic shows, rides, illusions, and scams of Coney Island (where they originated) into Manhattan in the 1870s, where they allowed the endless, upward expansion of the skyscraper. Junkspace, writes Koolhaas, relies on the same set of technologies, but expansion now happens in every direction, all at once, for its own sake:</p>
<blockquote>
<p>Continuity is the essence of Junkspace, it exploits any invention that enables expansion, deploys the infrastructure of seamlessness: escalator, air-conditioning, sprinkler, fire shutter, hot-air curtain … It is always interior, so extensive that you rarely perceive limits; it promotes disorientation by any means [...]</p>
</blockquote>
<p>Consider Garden House: Three virtually endless corridors stacked on top of each other, plus a lobby. There is no discernible reason that the building should end where it does — maybe it’s just where the money ran out. The rows of desks may as well continue in all three directions forever, supplied with filtered air, light and electricity through service lines running behind plastic tiles above and below.</p>
<p>But Garden House is also part of a much larger junkspace, ever metastasizing: The back door opens into a paved garden, which leads into a second, larger stack of corridors, which leads into a miniature strip-mall of fake regional restaurants, followed by a kind of parade ground patrolled by men in purple corporate uniforms<sup class="footnote-ref"><a href="https://maxkohler.com/posts/2020-02-29-junk-city/#fn4" id="fnref4">4</a></sup>. The Junk-Mothership (Westfield) looms just a few minutes down the road.</p>
<h2 id="conditional-space">Conditional Space</h2>
<figure class="post-figure post">
<img alt="" loading="lazy" src="https://maxkohler.com/assets/junk-city/wc-2.jpg" />
<figcaption>
<span class="figure__caption">
<p>Transparency only reveals everything in which you cannot partake</p>
</span>
<span class="figure__source">
<p><a href="https://www.rca.ac.uk/study/facilities-support/our-campus/rca-white-city/">Source</a></p>
</span>
</figcaption>
</figure>
<blockquote>
<p>Because it costs money, is no longer free, [air]conditioned space inevitably becomes conditional space; sooner or later all conditional space turns into Junkspace.</p>
</blockquote>
<p>RCA/CSM/CCA/LCC/LHR/LGW<sup class="footnote-ref"><a href="https://maxkohler.com/posts/2020-02-29-junk-city/#fn5" id="fnref5">5</a></sup> are conditional spaces: Pay for your immigration paperwork, ESL certificates, tuition and library fees, and you’re allowed past card readers and security guards into the building. Stop paying any of those, and your passport, access pass, and library card will be turned off remotely in a matter of hours.</p>
<p>When space becomes conditional, keeping out anyone who doesn’t fulfil the conditions becomes the primary objective. The ever-present card reader (accepting visa, mastercard, amex, student id, visitor pass, and library card) may be the most obvious built manifestation of this. But there are countless other pieces of junk serving the same purpose: security desks, lobbies, sliding doors of various descriptions, cameras, frosted glass, numbering systems, reports, the <em>student helper form</em> (all visits must be transactional), sign-in sheets, visitors' passes.</p>
<p>When I invited two friends into the studio recently, it took the better part of an afternoon and corruption on multiple levels to get them past the security desk. Visitors are only tolerated for limited periods, alumni are only welcome with their credit cards clearly visible.</p>
<h2 id="temporary-space">Temporary Space</h2>
<figure class="post-figure post">
<img alt="" loading="lazy" src="https://maxkohler.com/assets/junk-city/wc-3.jpg" />
<figcaption>
<span class="figure__caption">
<p>Junkspace is additive, layered and lightweight, quartered the way a carcass is torn apart</p>
</span>
<span class="figure__source">
<p><a href="https://www.rca.ac.uk/study/facilities-support/our-campus/rca-white-city/">Source</a></p>
</span>
</figcaption>
</figure>
<p>Junkspace is temporary. The economics that compel electronics manufacturers to make sure your phone breaks after a couple of years apply to construction, as well. Like phones, these buildings are designed to be consumed, ditched, replaced, and re-consumed in short intervals.</p>
<blockquote>
<p>Junkspace is additive, layered and lightweight, quartered the way a carcass is torn apart — individual chunks severed from a universal condition. There are no walls, only partitions, shimmering membranes frequently covered in mirror or gold […] Where once detailing suggested the coming together, possibly forever, of disparate materials, it is now a transient coupling, waiting to be undone, unscrewed, a temporary embrace with a high probability of separation; no longer the orchestrated encounter of difference, but the abrupt end of a system, a stalemate. […] While whole millennia worked in favor of permanence, axialities, relationships and proportion, the program of junkspace is escalation. Instead of development, it offers entropy.</p>
</blockquote>
<p>Everything about Garden House is provisional: None of the interior walls are load-bearing, so they can be moved as market forces dictate. A <em>Making Space</em> can be turned into a <em>Smart Zone</em> overnight by applying a few vinyl stickers and replacing a couple of technicians (zero-hour contracts make that a simple operation). The heavy, hard-to-move equipment like printing presses and machine tooling is kept at other campuses, as if to prevent their material permanence rubbing off onto Garden House<sup class="footnote-ref"><a href="https://maxkohler.com/posts/2020-02-29-junk-city/#fn6" id="fnref6">6</a></sup>. The whole place feels precarious, as if it may cease to exist at any moment.<sup class="footnote-ref"><a href="https://maxkohler.com/posts/2020-02-29-junk-city/#fn7" id="fnref7">7</a></sup></p>
<h2 id="consumed-space">Consumed Space</h2>
<figure class="post-figure post">
<img alt="" loading="lazy" src="https://maxkohler.com/assets/junk-city/wc-email.png" />
<figcaption>
<span class="figure__caption">
<p>Because it is endless, it always leaks somewhere in Junkspace</p>
</span>
</figcaption>
</figure>
<p>I’ve lost count of how many times the heating has been broken at Garden House. Electric heaters are scattered around the studio, artefacts of ice ages past. A constant state of disrepair is no accident, but a defining feature of Junkspace:</p>
<blockquote>
<p>Because it is endless, it always leaks somewhere in Junkspace; in the worst case, monumental ashtrays catch intermittent drops in gray broth […] Because it is so intensely consumed, Junkspace is fanatically maintained, the night shift undoing the damage of the day shift in an endless Sisyphean replay. As you recover from Junkspace, Junkspace recovers from you: between 2 and 5am, yet another population, this one heartlessly casual and appreciably darker, is mopping, hoovering, sweeping, toweling, resupplying.</p>
</blockquote>
<p>This endless recovery loop is mediated by a constant stream of language “woven through [Junkspace’s] texture of canned euphoria”. Your email account is probably full of it: Apology after apology (“Sorry for any inconvenience caused by the broken heating/clogged toilet/Prince of Wales”) from the Buildings and Estates department, mixed with the occasional threat to bin your belongings if not removed by such and such a date.</p>
<h2 id="how-to-survive-junkspace">How to survive Junkspace</h2>
<figure class="post-figure small">
<img alt="Meme in which a photo of an art installation under a staircase in White City is superimposed over a screenshot of the combat screen in the Pokemon video game. Caption reads: Wild Installation appeared!" loading="lazy" src="https://maxkohler.com/assets/junk-city/jc-meme.png" />
<figcaption>
<span class="figure__caption">
<p>Public space is the space of transgression</p>
</span>
</figcaption>
</figure>
<p>Once you recognise that Garden House and its surroundings are Junkspace, the question becomes: What do you do? Koolhaas doesn’t help us here. To him, architecture has nowhere left to go except sideways “like a crab on LSD”.</p>
<p>Hal Foster gives us a partial response<sup class="footnote-ref"><a href="https://maxkohler.com/posts/2020-02-29-junk-city/#fn8" id="fnref8">8</a></sup>, writing in 2013:</p>
<blockquote>
<p>In times of transition artists have played critically with capitalist junk. In his manifold practice <strong>Merz</strong>, Kurt Schwitters turned bits of rubbish in post-World War I Germany — fragments of advertisements, cashiered tickets, odd items stolen from friends — into the stuff of collages and constructions. […] Other examples in this vein include the “Bunk” collages produced by Paolozzi out of American glossies in post-World War II England, as well as installations […] staged by Claes Oldenburg in <strong>The Street</strong> and <strong>The Store</strong> in the early 1960s. In the present, too, artists such as Isa Genzken, Thomas Hirschhorn, and Rachel Harrison excel in this practice of mimetic exacerbation. If there is no other side to Junkspace, indeed no outside at all, they are still able to find fissures within this world, to pressure these cracks, and open up a little running room.</p>
</blockquote>
<p>In other words: If you can't dismantle Junkspace in the immediate term, you should subvert it. Find an opening somewhere, and carve out a space that is everything Junkspace is not: Dirty, uncontrolled, transgressive, economically useless. You build real communities within and without the ones imagined by the marketing department. Letting your friends in through the back door is an important act of architectural subversion, as is solidarity with the nightshift <sup class="footnote-ref"><a href="https://maxkohler.com/posts/2020-02-29-junk-city/#fn9" id="fnref9">9</a></sup>.</p>
<p>But the more hopeful point is this: Universities have the potential to be islands of crucial public space, even as consumerism turns the surrounding landscape into junk. Imagine what Garden House could be if tuition was free, cleaners, teachers, and technicians were on secure, long-term contracts, immigrants didn't have to fear deportation, and universities were funded such that they could build appropriate buildings without squeezing students for petty change at every turn. Perhaps we could dispense with the prison-style visitation system currently in place, and make the University once again a part of public space, and open to everyone<sup class="footnote-ref"><a href="https://maxkohler.com/posts/2020-02-29-junk-city/#fn10" id="fnref10">10</a></sup>. That world needs to be our ultimate goal <sup class="footnote-ref"><a href="https://maxkohler.com/posts/2020-02-29-junk-city/#fn11" id="fnref11">11</a></sup>.</p>
<section class="footnotes">
<ol class="footnotes-list">
<li id="fn1" class="footnote-item"><p>Rem Koolhaas (2001): <em>Junkspace.</em> In <em>The Harvard Design School Guide to Shopping</em>, Taschen. Available at <a href="http://www.cavvia.net/junkspace/">cavvia.net/junkspace</a> <a href="https://maxkohler.com/posts/2020-02-29-junk-city/#fnref1" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn2" class="footnote-item"><p>BBC (2004): <em>BBC Media Village White City</em>. Press Release. Available at <a href="http://www.bbc.co.uk/pressoffice/pressreleases/stories/2004/05_may/11/media_village.pdf">bbc.co.uk/pressoffice/pressreleases/stories/2004/05_may/11/media_village.pdf</a> <a href="https://maxkohler.com/posts/2020-02-29-junk-city/#fnref2" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn3" class="footnote-item"><p>Rem Koolhaas (1978): <em>Delirious New York.</em>, Oxford University Press <a href="https://maxkohler.com/posts/2020-02-29-junk-city/#fnref3" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn4" class="footnote-item"><p>I assume those came when <em>BBC Media Village</em> was sold off to private developers in 2015. They quickly rebranded it to <em>White City Place</em>, a title so normcore it’s frankly impressive. BBC (2015): <em>Media Centre, London: first in, last out</em>. Available at <a href="https://www.bbc.co.uk/blogs/aboutthebbc/entries/abe09136-ed47-4083-b35d-03473ecf8e8e">bbc.co.uk/blogs/aboutthebbc/entries/abe09136-ed47-4083-b35d-03473ecf8e8e</a> <a href="https://maxkohler.com/posts/2020-02-29-junk-city/#fnref4" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn5" class="footnote-item"><p>Royal College of Art, Central Saint Martins, Camberwell College of Arts, London College of Communication, London Heathrow, London Gatwick <a href="https://maxkohler.com/posts/2020-02-29-junk-city/#fnref5" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn6" class="footnote-item"><p>The next generation of temporary is around the corner: <em>Troubadour Theatre</em> is literally built from scaffolding and tarp. <a href="https://maxkohler.com/posts/2020-02-29-junk-city/#fnref6" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn7" class="footnote-item"><p>The notion of making everything temporary for the worker, flexible for the boss is part of a bigger economic trend. See: David Banks (2019): <em>Against We</em>. Commune Magazine, available at <a href="https://communemag.com/against-we/">communemag.com/against-we/</a> <a href="https://maxkohler.com/posts/2020-02-29-junk-city/#fnref7" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn8" class="footnote-item"><p>Hal Foster (2013): <em>Running Room.</em> In <em>Junkspace with Running Room,</em> Notting Hill Editions. <a href="https://maxkohler.com/posts/2020-02-29-junk-city/#fnref8" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn9" class="footnote-item"><p>Sally Weale (2019): <em>UCL workers to decide on strike action over “unjust” outsourcing.</em> The Guardian, available at <a href="https://www.theguardian.com/education/2019/oct/09/ucl-workers-to-decide-on-strike-action-over-unjust-outsourcing">theguardian.com/education/2019/oct/09/ucl-workers-to-decide-on-strike-action-over-unjust-outsourcing</a> <a href="https://maxkohler.com/posts/2020-02-29-junk-city/#fnref9" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn10" class="footnote-item"><p>I’m lifting the properties of “public space” and the caption to the final image from the architect Wim Cuyvers. His text <em>Public Space</em> (Undated) is available at <a href="https://www.readingdesign.org/public-space">readingdesign.org/public-space</a> <a href="https://maxkohler.com/posts/2020-02-29-junk-city/#fnref10" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn11" class="footnote-item"><p>Roland Ross contributed notes to this piece. It appeared first in <a href="https://www.instagram.com/p/B89rW1aB1E_/">Content Full</a> Issue 1. <a href="https://maxkohler.com/posts/2020-02-29-junk-city/#fnref11" class="footnote-backref">↩︎</a></p>
</li>
</ol>
</section>
Zoom Zoom Zoom2020-03-31T14:56:15Zhttps://maxkohler.com/posts/2020-03-03-zoom-zoom-zoom/<figure class="post-figure big">
<img alt="Illustration showing a greyscale, blurred screenshot of a Zoom conversation" loading="lazy" src="https://maxkohler.com/assets/zoomzoom.jpg" />
<figcaption>
</figcaption>
</figure>
<p>I recently wrote about the privatisation of university buildings<sup class="footnote-ref"><a href="https://maxkohler.com/posts/2020-03-03-zoom-zoom-zoom/#fn1" id="fnref1">1</a></sup> and how that’s a Bad Thing™.</p>
<p>Between the time I enrolled and graduated at Camberwell College of Arts, the building went from being largely open to the public to one where access to the building and movement inside it are tightly controlled by access cards, remote-controlled gates, cameras, visitor lists, fencing, and private security guards. The Royal College was already in a similar state when I got there (plus the private landlord adding their own power mechanisms to the pile in the form of inspections, defensive architecture, and more security guards).</p>
<p>The move to online teaching over the last few days is a dramatic escalation of this same movement. While our department heads are doing their best to get everyone onto Zoom calls or Hangouts as soon as possible, let’s remember what these apps are: entirely private, venture-funded<sup class="footnote-ref"><a href="https://maxkohler.com/posts/2020-03-03-zoom-zoom-zoom/#fn2" id="fnref2">2</a></sup>, data-collecting, inaccessible to anyone without high-end internet, for-profit spaces designed to reproduce the hierarchies of business meetings. Access is finely graded and revocable at any time if you stop paying your subscription fees or otherwise become a nuisance to the institution.</p>
<p>Zoom, which has somehow become the default choice for online teaching, embodies all of these attributes. It’s a particularly good example of how institutional power structures are hard-coded into these apps, beginning with who gets to control the knowledge about what’s happening on the platform:</p>
<blockquote>
<p>Zoom allows administrators to see detailed views on how, when, and where users are using Zoom, with detailed dashboards in real-time of user activity. Zoom also provides a ranking system of users based on total number of meeting minutes. If a user records any calls via Zoom, administrators can access the contents of that recorded call, including video, audio, transcript, and chat files, as well as access to sharing, analytics, and cloud management privileges.<sup class="footnote-ref"><a href="https://maxkohler.com/posts/2020-03-03-zoom-zoom-zoom/#fn3" id="fnref3">3</a></sup></p>
</blockquote>
<p>Nothing escapes the administrative gaze. Further, it can look beyond what happens in any given Zoom meeting and reach into whatever physical space we happen to be calling in from:</p>
<blockquote>
<p>For any meeting that has occurred or is in-process, Zoom allows administrators to see the operating system, IP address, location data, and device information of each participant. This device information includes the type of machine, specs on the make/model of your peripheral audiovisual devices like cameras or speakers, and names for those devices (for example, the user-configurable names given to AirPods). Administrators also have the ability to join any call at any time on their organization’s instance of Zoom, without in-the-moment consent or warning for the attendees of the call.</p>
</blockquote>
<p>With shared workspaces shuttered and everyone forced to connect from home, this data now illuminates formerly intimate spaces: my IP address no longer bounces between coffee shops, university, and the library, but invariably points to my house, and my device information now describes hardware I keep in my bedroom.</p>
<p>But the most dystopian Zoom feature of all has to be <em>Attendee Attention Tracking</em>. In a 2018 article the company describes the feature and its potential use in schools as follows:</p>
<blockquote>
<p>Cool feature alert! Attendee Attention Tracking in Zoom can help you monitor your students’ attention to your shared presentation. Whether it’s a video, a powerpoint, or your desktop, if Zoom is not the app in focus on a student’s computer you will see a clock indicator next to their name in the Participant box […] It may also be helpful to let your students know that you will be grading this metric. In the virtual classroom, anything you can do as educators to facilitate engagement and attention will translate to continued success in the classroom.<sup class="footnote-ref"><a href="https://maxkohler.com/posts/2020-03-03-zoom-zoom-zoom/#fn4" id="fnref4">4</a></sup></p>
</blockquote>
<p>Again, this information only flows upward, toward the administration (and, in semi-anonymised form, to Zoom and its advertising partners<sup class="footnote-ref"><a href="https://maxkohler.com/posts/2020-03-03-zoom-zoom-zoom/#fn5" id="fnref5">5</a></sup>). Attendees (remember when we were students?) are seen, but can’t themselves see beyond whatever material the organisation has made available.</p>
<figure class="post-figure post">
<img alt="Complex flow diagram" loading="lazy" src="https://maxkohler.com/assets/US08913103-20141216-D00000.png" />
<figcaption>
<span class="figure__caption">
<p>Google, 2012: <em>Method and apparatus for focus-of-attention control</em> (Patent Drawing)</p>
</span>
<span class="figure__source">
<p><a href="https://patents.google.com/patent/US8913103B1">patents.google.com/patent/US8913103B1</a></p>
</span>
</figcaption>
</figure>
<p>Management at the Royal College have announced they will be rolling out Zoom to all students, but it is unclear which of its administrative features they’re planning to use, what information they’re storing, and for what purpose. But even if they didn’t use any and stored nothing, the fact that these control mechanisms are built into the platform, ready to be turned on at any moment with no real option to dissent, is damaging enough. You can’t practice institutional critique when the institution is sitting on a giant mute button<sup class="footnote-ref"><a href="https://maxkohler.com/posts/2020-03-03-zoom-zoom-zoom/#fn6" id="fnref6">6</a></sup>.</p>
<p>Not that this kind of privatised software-space is new to education. Other ed-tech junk like Edublogs, Moodle, Panopto (<em>nice</em>), Connect2, G-Suite, and “The Intranet” fit largely similar descriptions. But under social distancing, these apps have become impossible to avoid.</p>
<p>This is why it is now more necessary than ever to interrogate these virtual spaces as we would the classroom and, as Hal Foster puts it, “find fissures within this world, to pressure these cracks, and open up a little running room”<sup class="footnote-ref"><a href="https://maxkohler.com/posts/2020-03-03-zoom-zoom-zoom/#fn7" id="fnref7">7</a></sup>.</p>
<p>Some of this has already started: public Whatsapp groups provide an unofficial commentary track to most official Zoom meetings. Multiple programs at the RCA have taken over their formerly marketing-oriented Instagram accounts and are using them to critique the institution. Official emails are screenshotted and discussed in cross-institutional Slack channels. This isn’t to say that these platforms don’t have their own problems, but by creating our own spaces within them, we’re at least avoiding the most immediate level of administrative control. The next step is to think about how we want this online-learning thing to work, and to build our own systems (not necessarily technological) to support that. <sup class="footnote-ref"><a href="https://maxkohler.com/posts/2020-03-03-zoom-zoom-zoom/#fn8" id="fnref8">8</a></sup></p>
<section class="footnotes">
<ol class="footnotes-list">
<li id="fn1" class="footnote-item"><p><a href="http://maxkohler.com/2020/junk-city">maxkohler.com/2020/junk-city</a> <a href="https://maxkohler.com/posts/2020-03-03-zoom-zoom-zoom/#fnref1" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn2" class="footnote-item"><p>Crunchbase, <em>Zoom Video Communications</em>. Available at <a href="https://www.crunchbase.com/organization/zoom-video-communications">crunchbase.com/organization/zoom-video-communications</a> <a href="https://maxkohler.com/posts/2020-03-03-zoom-zoom-zoom/#fnref2" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn3" class="footnote-item"><p>Electronic Frontier Foundation, 2020: <em>What You Should Know About Online Tools During the COVID-19 Crisis</em>. Available at <a href="https://www.eff.org/deeplinks/2020/03/what-you-should-know-about-online-tools-during-covid-19-crisis">eff.org/deeplinks/2020/03/what-you-should-know-about-online-tools-during-covid-19-crisis</a> <a href="https://maxkohler.com/posts/2020-03-03-zoom-zoom-zoom/#fnref3" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn4" class="footnote-item"><p>Zoom Blog, 2018: <em>Zoom Tips For Educators: Attendee Attention Tracking</em>. Available at <a href="https://blog.zoom.us/wordpress/2018/01/26/zoom-tips-for-educators-attendee-attention-tracking/">blog.zoom.us/wordpress/2018/01/26/zoom-tips-for-educators-attendee-attention-tracking/</a> <a href="https://maxkohler.com/posts/2020-03-03-zoom-zoom-zoom/#fnref4" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn5" class="footnote-item"><p>Vice, 2020: <em>Zoom iOS App Sends Data to Facebook Even if You Don’t Have a Facebook Account</em>. Available at <a href="https://www.vice.com/en_us/article/k7e599/zoom-ios-app-sends-data-to-facebook-even-if-you-dont-have-a-facebook-account">vice.com/en_us/article/k7e599/zoom-ios-app-sends-data-to-facebook-even-if-you-dont-have-a-facebook-account</a> <a href="https://maxkohler.com/posts/2020-03-03-zoom-zoom-zoom/#fnref5" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn6" class="footnote-item"><p>It took the Royal College of Art exactly two weeks to use that mute button for union busting. <a href="https://twitter.com/RcaUcu/status/1225471357275770883">twitter.com/RcaUcu/status/1225471357275770883</a> <a href="https://maxkohler.com/posts/2020-03-03-zoom-zoom-zoom/#fnref6" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn7" class="footnote-item"><p>Hal Foster, 2013: <em>Running Room</em>. In: <em>Junkspace with Running Room</em>, Notting Hill Editions. <a href="https://maxkohler.com/posts/2020-03-03-zoom-zoom-zoom/#fnref7" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn8" class="footnote-item"><p>This was first published in <a href="http://content-free.net/articles/zoom-zoom-zoom">Content Free</a> <a href="https://maxkohler.com/posts/2020-03-03-zoom-zoom-zoom/#fnref8" class="footnote-backref">↩︎</a></p>
</li>
</ol>
</section>
Here’s Jony2020-05-24T14:56:15Zhttps://maxkohler.com/posts/2020-05-09-here-is-jony/<p>Wrote down some notes on Jony Ive's Zoom seminar at the RCA with Roland Ross. The short version is that it was pretty bizarre. Read the long version on <a href="http://content-free.net/articles/here-comes-jony">Content Free</a>.</p>
The Business of Design2020-05-24T14:56:15Zhttps://maxkohler.com/posts/2020-05-24-business-of-design/<p>Notes from Jaguar Land Rover design director Gerry McGovern's lecture on the RCA Zoom. This was basically an hour-long fanfic about a world in which climate change doesn't exist and thousands of people don't get killed by SUVs every year. Roland Ross in black, me in green. <a href="http://content-free.net/articles/the-business-of-design">Read on Content Free</a>.</p>
How to deploy a Wordpress Site using Github Actions2020-08-05T00:00:00Zhttps://maxkohler.com/posts/2020-08-05-github-actions-wordpress/<p>I started using <a href="https://www.netlify.com/">Netlify</a> a few weeks ago, and I've already gotten very used to the workflow it enables:</p>
<ol>
<li>You work on a local copy of your website</li>
<li>You push changes to Github</li>
<li>Netlify notices you made a change</li>
<li>It makes a fresh clone of the repository</li>
<li>It runs whatever build process you set up</li>
<li>It takes the result of that build process and deploys to the web</li>
</ol>
<p>This workflow feels so good to me that I want to have it on every one of my projects – including the few Wordpress websites I work on. You can't just throw a Wordpress site onto Netlify (unless you go the headless CMS route, but that's a different story), but you can still have the nice workflow by leveraging Github's own build system: <a href="https://github.com/features/actions">Github Actions</a>.</p>
<h2 id="github-actions">Github Actions</h2>
<p>While Netlify's build process feels very much designed to <em>build and deploy the website when you push to the repository</em>, Github Actions can pretty much <em>do anything you want on any event that can happen in a git repository</em>. That's powerful, but it also means they need a little more configuration.</p>
<p>You set up an Action (or <em>Workflow</em> – the terminology is a little confusing there) by creating a YAML file in a special folder called <code>.github/workflows</code> at the root of your project repository.</p>
<p>Mine looks like this <sup class="footnote-ref"><a href="https://maxkohler.com/posts/2020-08-05-github-actions-wordpress/#fn1" id="fnref1">1</a></sup>:</p>
<pre class="language-yaml"><code class="language-yaml"><span class="highlight-line"><span class="token key atrule">name</span><span class="token punctuation">:</span> CI</span>
<mark class="highlight-line highlight-line-active"><span class="token key atrule">on</span><span class="token punctuation">:</span></mark>
<mark class="highlight-line highlight-line-active"> <span class="token key atrule">push</span><span class="token punctuation">:</span></mark>
<mark class="highlight-line highlight-line-active"> <span class="token key atrule">branches</span><span class="token punctuation">:</span> <span class="token punctuation">[</span>main<span class="token punctuation">]</span></mark>
<span class="highlight-line"></span>
<span class="highlight-line"><span class="token key atrule">jobs</span><span class="token punctuation">:</span></span>
<span class="highlight-line"> <span class="token key atrule">deploy</span><span class="token punctuation">:</span></span>
<span class="highlight-line"> <span class="token key atrule">runs-on</span><span class="token punctuation">:</span> ubuntu<span class="token punctuation">-</span>latest</span>
<mark class="highlight-line highlight-line-active"> <span class="token key atrule">steps</span><span class="token punctuation">:</span></mark>
<mark class="highlight-line highlight-line-active"> <span class="token punctuation">-</span> <span class="token key atrule">uses</span><span class="token punctuation">:</span> actions/checkout@v2</mark>
<mark class="highlight-line highlight-line-active"> <span class="token punctuation">-</span> <span class="token key atrule">name</span><span class="token punctuation">:</span> Install dependencies</mark>
<mark class="highlight-line highlight-line-active"> <span class="token key atrule">run</span><span class="token punctuation">:</span> yarn install</mark>
<mark class="highlight-line highlight-line-active"> <span class="token punctuation">-</span> <span class="token key atrule">name</span><span class="token punctuation">:</span> Run build command</mark>
<mark class="highlight-line highlight-line-active"> <span class="token key atrule">run</span><span class="token punctuation">:</span> yarn build</mark>
<mark class="highlight-line highlight-line-active"> <span class="token punctuation">-</span> <span class="token key atrule">name</span><span class="token punctuation">:</span> Deploy via FTP</mark>
<mark class="highlight-line highlight-line-active"> <span class="token key atrule">run</span><span class="token punctuation">:</span> yarn deploy</mark>
<span class="highlight-line"> <span class="token key atrule">env</span><span class="token punctuation">:</span></span>
<span class="highlight-line"> <span class="token key atrule">NODE_ENV</span><span class="token punctuation">:</span> production</span>
<span class="highlight-line"> <span class="token key atrule">FTP_HOST</span><span class="token punctuation">:</span> $<span class="token punctuation">{</span><span class="token punctuation">{</span> secrets.FTP_HOST <span class="token punctuation">}</span><span class="token punctuation">}</span></span>
<span class="highlight-line"> <span class="token key atrule">FTP_USER</span><span class="token punctuation">:</span> $<span class="token punctuation">{</span><span class="token punctuation">{</span> secrets.FTP_USER <span class="token punctuation">}</span><span class="token punctuation">}</span></span>
<span class="highlight-line"> <span class="token key atrule">FTP_PASSWORD</span><span class="token punctuation">:</span> $<span class="token punctuation">{</span><span class="token punctuation">{</span> secrets.FTP_PASSWORD <span class="token punctuation">}</span><span class="token punctuation">}</span></span></code></pre>
<p>The <code>on</code> key at the top of the file tells Github when to run the workflow. In this case, that's whenever I <code>push</code> to the <code>main</code> branch.</p>
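<p>The workflow in this post only needs <code>push</code>, but the <code>on</code> key accepts other events too. A quick sketch (the <code>pull_request</code> trigger and the cron schedule here are illustrative additions, not part of my setup):</p>
<pre class="language-yaml"><code class="language-yaml">on:
  push:
    branches: [main]
  # Also run when someone opens or updates a pull request against main
  pull_request:
    branches: [main]
  # And on a timer, e.g. every Monday at 04:00 UTC
  schedule:
    - cron: "0 4 * * 1"
</code></pre>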
<p>Then you describe what work you want the workflow to do. Per <a href="https://docs.github.com/en/actions/configuring-and-managing-workflows/configuring-a-workflow">Github's documentation</a>:</p>
<blockquote>
<p>Workflows must have at least one job, and jobs contain a set of steps that perform individual tasks. Steps can run commands or use an action. You can create your own actions or use actions shared by the GitHub community and customize them as needed.</p>
</blockquote>
<p>My workflow here has one job called <code>deploy</code> with four steps:</p>
<ol>
<li><code>actions/checkout@v2</code> is an action <a href="https://github.com/marketplace/actions/checkout">written by Github itself</a> that downloads a fresh copy of your repository.</li>
<li><code>Install dependencies</code> runs <code>yarn install</code> which pulls down the dependencies I've listed in my <code>package.json</code> file.</li>
<li><code>Run build command</code> triggers <code>yarn run build</code>, which in turn is pointed at a gulp task that does the actual work of compiling my Sass, packaging my Javascript and whatever else I need to do<sup class="footnote-ref"><a href="https://maxkohler.com/posts/2020-08-05-github-actions-wordpress/#fn2" id="fnref2">2</a></sup>.</li>
<li><code>Deploy via FTP</code> runs <code>yarn run deploy</code>, which is pointed at <a href="https://www.npmjs.com/package/vinyl-ftp">another gulp task</a> that uploads the contents of the repository (including the files we just built) to the server my Wordpress site lives on.</li>
</ol>
<h2 id="secrets">Secrets</h2>
<p>The last step is interesting: How does the gulp task know how to FTP into my server? I certainly don't want to put my login credentials into my repository, but how else could I tell my build process about them? Turns out Github has a mechanism called <a href="https://docs.github.com/en/actions/configuring-and-managing-workflows/creating-and-storing-encrypted-secrets">secrets</a> that's designed just for this purpose.</p>
<p>Instead of storing the secrets inside your repository (which, again, <em>terrible idea</em>), you go into your repository's settings on Github and enter them there, where they're stored safely and well-encrypted. The interface looks like this:</p>
<p><img src="https://maxkohler.com/assets/gh-secrets.png" alt="Screenshot showing github secrets interface" /></p>
<p>Then, you can access those secrets during your workflows by adding them as environment variables to individual <em>steps</em> – that's what the <code>env</code> property in my YAML file does:</p>
<pre class="language-yaml"><code class="language-yaml"><span class="token key atrule">env</span><span class="token punctuation">:</span>
<span class="token key atrule">NODE_ENV</span><span class="token punctuation">:</span> production
<span class="token key atrule">FTP_HOST</span><span class="token punctuation">:</span> $<span class="token punctuation">{</span><span class="token punctuation">{</span> secrets.FTP_HOST <span class="token punctuation">}</span><span class="token punctuation">}</span>
<span class="token key atrule">FTP_USER</span><span class="token punctuation">:</span> $<span class="token punctuation">{</span><span class="token punctuation">{</span> secrets.FTP_USER <span class="token punctuation">}</span><span class="token punctuation">}</span>
<span class="token key atrule">FTP_PASSWORD</span><span class="token punctuation">:</span> $<span class="token punctuation">{</span><span class="token punctuation">{</span> secrets.FTP_PASSWORD <span class="token punctuation">}</span><span class="token punctuation">}</span></code></pre>
<p>Those environment variables make it all the way down into my gulpfile, where I access them using <code>process.env</code>:</p>
<pre class="language-js"><code class="language-js"><span class="token keyword">function</span> <span class="token function">deploy</span><span class="token punctuation">(</span><span class="token punctuation">)</span> <span class="token punctuation">{</span>
<span class="token keyword">const</span> ftpConnection <span class="token operator">=</span> ftp<span class="token punctuation">.</span><span class="token function">create</span><span class="token punctuation">(</span><span class="token punctuation">{</span>
<span class="token literal-property property">host</span><span class="token operator">:</span> process<span class="token punctuation">.</span>env<span class="token punctuation">.</span><span class="token constant">FTP_HOST</span><span class="token punctuation">,</span>
<span class="token literal-property property">user</span><span class="token operator">:</span> process<span class="token punctuation">.</span>env<span class="token punctuation">.</span><span class="token constant">FTP_USER</span><span class="token punctuation">,</span>
<span class="token literal-property property">password</span><span class="token operator">:</span> process<span class="token punctuation">.</span>env<span class="token punctuation">.</span><span class="token constant">FTP_PASSWORD</span><span class="token punctuation">,</span>
<span class="token punctuation">}</span><span class="token punctuation">)</span><span class="token punctuation">;</span>
<span class="token comment">// (rest of deployment code omitted)</span>
<span class="token punctuation">}</span></code></pre>
<p>This totally works! Whenever I push to the repository, the action is triggered and I can follow its progress through this nice UI Github gives you:</p>
<p><img src="https://maxkohler.com/assets/gh-action.png" alt="Screenshot of github actions interface" /></p>
<p>My process for working on Wordpress sites now looks like this:</p>
<ol>
<li>I work on a local copy of the site</li>
<li>When I've made a change, I push it to Github</li>
<li>The workflow we just defined checks out the repository</li>
<li>It installs my dependencies and runs my build process</li>
<li>It FTPs into my server and uploads the freshly-built files</li>
</ol>
<p>Just what I set out to do.</p>
<section class="footnotes">
<ol class="footnotes-list">
<li id="fn1" class="footnote-item"><p>I’m only dealing with the Wordpress theme here (i.e. a single folder), but if you had a more complicated setup (maybe involving custom functions) you could extend this configuration to accommodate that, too. <a href="https://maxkohler.com/posts/2020-08-05-github-actions-wordpress/#fnref1" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn2" class="footnote-item"><p>I like using <code>yarn run build</code> instead of the actual gulp command here because it means that when I change my build process, I only have to update my <code>package.json</code> file and the Action will still work. It’s also nice not to have to remember a whole bunch of different build commands as you switch between projects – it’s always <code>yarn build</code>. <a href="https://maxkohler.com/posts/2020-08-05-github-actions-wordpress/#fnref2" class="footnote-backref">↩︎</a></p>
</li>
</ol>
</section>
Where Does Logic Go on Jamstack Sites?2020-08-24T00:00:00Zhttps://maxkohler.com/posts/2020-08-24-where-does-logic-go-on-jamstack-sites/<p>I just wrote about that question over on <a href="https://css-tricks.com/where-does-logic-go-on-jamstack-sites/">CSS-Tricks</a>. I go into some detail there, but the argument boils down to this: Even though a basic idea of the Jamstack is that you do as much of your logic as you can during your build process (earlier in the programme lifecycle, if you will), you still have options. Namely:</p>
<ul>
<li>Do the logic in your head and write down the results</li>
<li>Move it into the build process</li>
<li>Put it into an edge worker</li>
<li>Do it in Javascript on the user's device after they've loaded the site.</li>
</ul>
<p>Recently (like on the <a href="https://maxkohler.com/work/camberwell-2020/">Wish you were here</a> site), I tend to do a combination of all of the above.</p>
<p>Again, you can <a href="https://css-tricks.com/where-does-logic-go-on-jamstack-sites/">read the full piece on CSS-Tricks</a>.</p>
Just enough CMS2020-08-29T00:00:00Zhttps://maxkohler.com/posts/2020-08-29-just-enough-cms/<p>If the answer is yes, I'm creating a lot of work for myself <em>now</em>: I'll have to spin up some local development environment, think up a data structure, and write a bunch of templates. But <em>later</em> the CMS will make it really easy to update the site, so hopefully I'll make up the time I spent setting it up.</p>
<p>If the answer is no, I'm making the opposite bargain. I'm making my life easy <em>now</em> by writing straight-up HTML files, but if I ever need to update the site <em>later</em> it'll be a bunch of effort.</p>
<p>Something that's been helping me recently is to think about this not as a yes/no question, but as one of degrees. Your website might need <em>a little bit of CMS</em>, or <em>a whole bunch</em>. Your needs might change over time, too. Thankfully there's all kinds of technologies that give you those gradations.</p>
<h2 id="data-files">Data files</h2>
<p>Your data is still in your git repository, but you've moved it from your HTML into something like a <code>.yaml</code> file that's easier to write. Then you have some build process that combines the information from that file with a template and produces the final HTML. This is exactly how I'm writing this blog post: A markdown file that's compiled to HTML by <a href="https://www.11ty.dev/">Eleventy</a>.</p>
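<p>As a minimal sketch of that pattern in plain JavaScript (the data, template, and <code>render</code> helper here are invented for illustration; Eleventy's real API works differently), a build step merges structured data into a template to produce the final HTML:</p>

```javascript
// Hypothetical example: the "data file" content, as it might look
// after being parsed from YAML or markdown front matter.
const data = {
  title: "Just enough CMS",
  date: "2020-08-29",
};

// A tiny template renderer: replaces {{ key }} placeholders with
// matching values from the data object.
function render(template, data) {
  return template.replace(/\{\{\s*(\w+)\s*\}\}/g, (_, key) => data[key] ?? "");
}

const template = "<h1>{{ title }}</h1><time>{{ date }}</time>";
console.log(render(template, data));
// → <h1>Just enough CMS</h1><time>2020-08-29</time>
```

<p>The point is the separation: the content lives in a file that's easy to edit, and the build process does the merging.</p>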
<h2 id="headless">Headless</h2>
<p>There's a bunch of different levels of involvement here. At the low end is something like a shared Google spreadsheet that you export to CSV and feed to your build process (a totally valid thing to do).</p>
<p>Then there's things like Contentful and Sanity that follow a "bring your own schema" model, i.e. they <em>only</em> deal with the data you specify.</p>
<p>Even higher up the scale are things like Ghost, or even Wordpress (through the <a href="https://developer.wordpress.org/rest-api/">REST API</a>) that start to make some assumptions about how you might want to structure your content. They probably have some built-in schemas for things like articles and comments.</p>
<p>Either way, these tools give you some useful powers:</p>
<ul>
<li>Fancy editing UI</li>
<li>User management</li>
<li>Some way to deal with assets, especially images.</li>
</ul>
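<p>To make the headless idea concrete, here's a rough sketch of pulling content from a Wordpress site at build time (the site URL is a placeholder argument; <code>/wp-json/wp/v2/posts</code> and <code>per_page</code> are Wordpress's standard REST route and parameter):</p>

```javascript
// Build the REST URL for a site's posts ("site" is a placeholder
// argument; point it at a real Wordpress install).
function postsUrl(site, perPage = 10) {
  return `${site}/wp-json/wp/v2/posts?per_page=${perPage}`;
}

// Fetch and parse the posts during the build (Node 18+ ships a
// global fetch; older versions need a library like node-fetch).
async function fetchPosts(site) {
  const response = await fetch(postsUrl(site));
  if (!response.ok) throw new Error(`Request failed: ${response.status}`);
  return response.json();
}
```

<p>Your build process can then feed the returned post objects into templates just like a local data file; the only difference is where the content lives.</p>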
<h2 id="full-on-cms">Full-on CMS</h2>
<p>To me, these are products like Wordpress, Shopify, or Tumblr. Big, sophisticated programs that run on your server and handle both the authoring and the delivery of your website.</p>
<p>You get all the features of the headless CMS, plus some selection of other <em>stuff</em>. A big one for me is e-commerce, but there's also things like commenting, scheduling, sharing, a plugin system, and sometimes in-built distribution systems.</p>
<hr />
<p>Lately I've been enjoying working on the lower end of this scale. There's been a few cases where I've had to <a href="https://maxkohler.com/work/camberwell-2020/">move up the scale mid-project</a>, but that turned out to be easier than I had thought. I imagine going the other way would be much harder.</p>
Cloud Visions2020-09-27T00:00:00Zhttps://maxkohler.com/posts/2020-09-27-cloud-visions/<figure class="post-figure post">
<img alt="Blurry photograph of people on beachfront" loading="lazy" src="https://maxkohler.com/assets/cloud.jpg" />
<figcaption>
<span class="figure__caption">
<p>Cropped version of an image originally published in <em>The Mail Online</em> with the caption: <em>Thousands of Britons ignored repeated warnings to stay home as part of ongoing efforts to clamp down on coronavirus by heading to DIY stores, parks and beaches on Sunday.</em></p>
</span>
<span class="figure__source">
<p><a href="https://www.dailymail.co.uk/news/article-8258383/More-people-leave-homes-flock-DIY-stores-parks-despite-Covid-19-lockdown.html">Source</a></p>
</span>
</figcaption>
</figure>
<p>In late March, some weeks into the pandemic, a new genre of local news story emerged: Local Park Crowded With People Despite Social Distancing Orders. These stories usually struck a similar tone, describing how, while most people were behaving responsibly and staying indoors, a minority decided to flout the clear guidelines set out by the local government and gathered outdoors.</p>
<p>With these stories came a particular set of images, showing sunny paths and streets filled with seemingly oblivious people moving around in dangerous proximity. The real subject of these images is what isn’t there: distance between the bodies. Whatever empty space does remain visible in the image is being encroached on from all sides by people walking, running, and cycling, bleeding in and out of focus.</p>
<p>Soon after these images appeared, people began to question how well they reflected the reality on the ground. Their composition seemed too similar (we’re always looking along the path from an elevated position, never across), and their optical artifacts (the condensed perspective and narrow depth of field of a long telephoto lens) too pronounced to be accidental.</p>
<p>Even as Twitter users advanced this point by comparing the media photographs with satellite images of the area<sup class="footnote-ref"><a href="https://maxkohler.com/posts/2020-09-27-cloud-visions/#fn1" id="fnref1">1</a></sup>, the answer to the original question — are the people in the image keeping to the two metre distance? — remained elusive. All we have is a distorted slice of reality, blurred not only by the telephoto lens, but also the shimmering summer air, the JPEG algorithm, and the conflicting narratives surrounding them.</p>
<p>But suppose we had some way of accurately measuring the distance between people in these images: What good would that information be, anyway? The two-metre line doesn’t represent a physical boundary (airborne particles don’t suddenly stop once they reach it) but a statistical one: at two metres, your risk of infection is low enough to improve public health. The real droplet-cloud has no boundary: you and I feed it every time we exhale, it interacts with the built environment in complex ways, phasing in and out of existence. You’re always already enveloped by it. The cloud is not only physical but also epistemological, spreading maddening uncertainty wherever the wind blows it.<sup class="footnote-ref"><a href="https://maxkohler.com/posts/2020-09-27-cloud-visions/#fn2" id="fnref2">2</a></sup></p>
<p>As an illustration of this condition, the blurred, distorted, compressed images of parks and seafronts take on new meaning. In them, people, the heated atmosphere, and the built environment melt together into a single, ever-moving, amorphous body – this is the cloud, made visible.</p>
<p>The cloud is a terrifying entity, escaping our attempts at classifying and understanding it since the beginning of such efforts in the 18th century. But according to the architect Eyal Weizman, the ephemeral nature of the cloud also forms the basis of its civic potential. Because the cloud doesn’t stop at national borders or the threshold of buildings, it has the potential to create a political space that equally reaches across existing divisions. Everyone who is enveloped by the cloud becomes an inhabitant of this new space: a citizen of the cloud.</p>
<p>Read from this perspective, the images of crowded bodies in bright sunlight lose nothing of their subtle terror. But perhaps in their blurriness, a glimmer of hope for a new, more inclusive political community may be found.<sup class="footnote-ref"><a href="https://maxkohler.com/posts/2020-09-27-cloud-visions/#fn3" id="fnref3">3</a></sup></p>
<section class="footnotes">
<ol class="footnotes-list">
<li id="fn1" class="footnote-item"><p>Joey D’Urso (2020), <em>Here’s Why Some Pictures Of People Supposedly Breaking Coronavirus Social Distancing Rules Can Be Misleading</em>. In Buzzfeed News, available at <a href="https://www.buzzfeed.com/joeydurso/coronavirus-social-distancing-lockdown-photos">buzzfeed.com/joeydurso/coronavirus-social-distancing-lockdown-photos</a> <a href="https://maxkohler.com/posts/2020-09-27-cloud-visions/#fnref1" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn2" class="footnote-item"><p>Eyal Weizman (2017), <em>Forensic Architecture: Violence at the Threshold of Detectability</em>, p 193. Zone Books. <a href="https://maxkohler.com/posts/2020-09-27-cloud-visions/#fnref2" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn3" class="footnote-item"><p>This text is also published on <a href="https://medium.com/@maxakohler/cloud-visions-cccf42ef8447">Medium</a>. An extended version is forthcoming in <em>Content Full</em> <a href="https://maxkohler.com/posts/2020-09-27-cloud-visions/#fnref3" class="footnote-backref">↩︎</a></p>
</li>
</ol>
</section>
In loving memory of degree shows2020-09-28T00:00:00Zhttps://maxkohler.com/posts/2020-09-28-in-loving-memory-of-degree-shows/<p><span class="leadin">The degree show</span> is a staple of art school life in Britain and the United States. Held at the end of the summer term, it’s an opportunity for graduates to develop their work in a high-stakes exhibition environment, celebrate the time spent together, and speak to a wider public. Since indoor events are still off-limits in most places, this year’s degree shows have largely moved online.</p>
<p>Looking through these shows (<em>Lecture in Progress</em> keeps a helpful list<sup class="footnote-ref"><a href="https://maxkohler.com/posts/2020-09-28-in-loving-memory-of-degree-shows/#fn1" id="fnref1">1</a></sup>), you quickly notice that most of them follow the same pattern: We land on a list of students’ names, sometimes led by an opening statement from the course leader or chancellor (in video format if you’re particularly unlucky). Each name links to a page that contains information about the student: Their name, a statement introducing themselves and their work, a list of links to their social media profiles, followed by one or more projects represented by some combination of images, video, and text. While the execution of this varies, the structure is largely consistent across dozens of shows from Britain, Europe, and the United States.</p>
<p>This leads to an obvious question: Why is it that all of these art schools independently came to the conclusion that their degree show should not only be replaced by a website (overruling student protests at the Royal College of Art<sup class="footnote-ref"><a href="https://maxkohler.com/posts/2020-09-28-in-loving-memory-of-degree-shows/#fn2" id="fnref2">2</a></sup> and other institutions<sup class="footnote-ref"><a href="https://maxkohler.com/posts/2020-09-28-in-loving-memory-of-degree-shows/#fn3" id="fnref3">3</a></sup>), but one that follows the same structure across the board? This collective falling-in-line happened remarkably fast — as recently as March, the question of how the degree show should be adapted to the pandemic environment still seemed wide open.<sup class="footnote-ref"><a href="https://maxkohler.com/posts/2020-09-28-in-loving-memory-of-degree-shows/#fn4" id="fnref4">4</a></sup></p>
<p>Somehow, the online degree show became the <em>obvious choice</em> basically overnight. This might be partly explained on technical grounds (we had our laptops open anyway), but it’s worth thinking through how pre-existing institutional circumstances may have contributed, too.</p>
<h2 id="administration">Administration</h2>
<p>The artistic work that happens in the leadup to a group show — the haggling around wall space, the writing of exhibition texts, the production of printed matter, the development of the work itself — usually takes place almost exclusively between students and teachers. Students are encouraged to take control of the exhibition space, make their own measurements and come to independent decisions (or at least can’t easily be prevented from doing so). Save for an electrical inspection and a handful of VIP events (which are tolerated), the college administration is kept at arm’s length.</p>
<p>In the online degree show, this relationship is reversed. A sprawling network of administrative departments (IT, Marketing, Health and Safety, Alumni Relations, Chancellor’s Office) supported by external consultants and software developers takes control of most aspects of the show. Digital platforms give administrators sophisticated tools to finely grade or outright deny access to the exhibition space, reducing students and teachers to submitting questions and hoping they will be <em>brought up</em> at the relevant committee meeting. On my course at the Royal College of Art, this lack of visibility was so egregious that the entire group of student curators resigned a few weeks into the planning process.</p>
<p>Following a year of widespread strikes, protests, and criticism levelled against university management, it isn’t surprising that administrators everywhere would push through a degree show format that shores up their position and minimises the possibility of public dissent.</p>
<h2 id="content">Content</h2>
<p>Of all the administrative departments, Marketing might be the biggest winner in the move to virtual degree shows. Marketing departments have long mined degree shows for content by interviewing graduating students, asking them to write for the institution’s website, staging Instagram takeovers, and commissioning photography of the exhibition to be used in next year’s catalogue. The online degree show makes this work much easier: Here is all this year’s work, already photographed and written about in digestible chunks to which the university has indefinite usage rights — ready to be recycled, curated, promoted <em>across our channels</em> forever. In this turn toward <em>content</em>, the move to an online show mirrors the move to online teaching.<sup class="footnote-ref"><a href="https://maxkohler.com/posts/2020-09-28-in-loving-memory-of-degree-shows/#fn5" id="fnref5">5</a></sup></p>
<p>Faced with a devastating drop in admissions in the autumn, these marketing activities have become more critical to the institution than ever. An online degree show following the list/detail model fits them, offering a searchable, well-organised database of all the available content, ready to be fed into upcoming recruitment campaigns.</p>
<p>It’s hard to imagine this didn’t inform the administrator’s nearly uniform response — their jobs probably depend on it.</p>
<h2 id="the-new-art-student">The new art student</h2>
<p>In a recent essay, the Oslo-based artist Ane Hjort Guttu writes about the decline of the old notion of the art student as “a somewhat inarticulate individual who achieved insight through their singular, introvert practice and whose main workplace was the studio”, and to whom “health and safety protocols, due notification procedures” and, to read between the lines, <em>employability,</em> were entirely irrelevant. In the new market-driven art school, this old, crumpled figure is replaced by a new ideal:</p>
<blockquote>
<p>[…] the project manager – a team leader of a research network, for example. This ideal person does not need a personal workspace, but can work quite happily in open-plan offices, formulating project descriptions in collaboration with research clusters throughout the European Union. He/she is at the forefront as far as specialised technology is concerned, but also very open towards working across different academic disciplines – if not in practice, then at least in theory. He/she likes to eat in the canteen, is good with digital platforms, announces his/her need for a conference room well in advance, does not spill things, and does not make a mess. He or she goes home at 17:00.<sup class="footnote-ref"><a href="https://maxkohler.com/posts/2020-09-28-in-loving-memory-of-degree-shows/#fn6" id="fnref6">6</a></sup></p>
</blockquote>
<p>Guttu traces this development in the physical architecture of contemporary art schools, but her analysis extends easily to the virtual architecture of the typical online degree show, which is designed for the same project-manager-student. They have well-lit, nudity-free reproductions of their work in web-friendly formats readily at hand, and have no trouble turning out a concise summary of themselves and their <em>research interests</em>. Their social media profiles are up to date, <em>professional</em>, and ready to be listed in the contact section of their profile. When this figure is the unquestioned ideal, the design decisions flowing into an online degree show do indeed become obvious.</p>
<hr />
<p>The in-person degree show is a confusing, stubbornly local, deeply unprofitable, often inward-looking and at times radical event. I’m not arguing that this can’t be achieved in an online format – examples like Liverpool’s <em>Degree Show on Mars</em><sup class="footnote-ref"><a href="https://maxkohler.com/posts/2020-09-28-in-loving-memory-of-degree-shows/#fn7" id="fnref7">7</a></sup> show that, if you give space to students and teachers to truly engage with the medium, it can be done. But if you open up control in this way, alternative proposals like delayed in-person shows, books, and even the redistribution of the show budget to students become possible, too. For the reasons outlined here, most institutions were unwilling to contemplate those possibilities, and instead opted for a response that’s in line with the ongoing marketisation of art education.<sup class="footnote-ref"><a href="https://maxkohler.com/posts/2020-09-28-in-loving-memory-of-degree-shows/#fn8" id="fnref8">8</a></sup></p>
<section class="footnotes">
<ol class="footnotes-list">
<li id="fn1" class="footnote-item"><p><em>Lecture in Progress Degree Show Listings</em> (2020). Available at <a href="https://degreeshows.lectureinprogress.com/">https://degreeshows.lectureinprogress.com/</a> <a href="https://maxkohler.com/posts/2020-09-28-in-loving-memory-of-degree-shows/#fnref1" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn2" class="footnote-item"><p>Anonymous (2020): <em>How Coronavirus Ate the Art School</em>. In Elephant Magazine. Available at <a href="https://elephant.art/how-coronavirus-ate-the-art-school-royal-college-art-rca-degree-show-education-01042020/">elephant.art/how-coronavirus-ate-the-art-school-royal-college-art-rca-degree-show-education-01042020/</a> <a href="https://maxkohler.com/posts/2020-09-28-in-loving-memory-of-degree-shows/#fnref2" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn3" class="footnote-item"><p>David Batty (2020): <em>Students criticise Royal College of Art’s plan to hold degree show online</em>. In The Guardian, available at <a href="https://www.theguardian.com/education/2020/mar/24/students-criticise-royal-college-of-arts-plan-to-hold-degree-show-online">theguardian.com/education/2020/mar/24/students-criticise-royal-college-of-arts-plan-to-hold-degree-show-online</a> <a href="https://maxkohler.com/posts/2020-09-28-in-loving-memory-of-degree-shows/#fnref3" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn4" class="footnote-item"><p>Gabrielle de la Puente, Zarina Muhammad (2020): <em>My degree show was cancelled – what can I do instead? The White Pube advise.</em> In Dazed, available at <a href="https://www.dazeddigital.com/art-photography/article/48487/1/my-degree-show-was-cancelled-what-can-i-do-instead-the-white-pube-advise">dazeddigital.com/art-photography/article/48487/1/my-degree-show-was-cancelled-what-can-i-do-instead-the-white-pube-advise</a> <a href="https://maxkohler.com/posts/2020-09-28-in-loving-memory-of-degree-shows/#fnref4" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn5" class="footnote-item"><p>Juliet Jacques (2020): <em>The Digital Classroom and the Digital Studio</em>. In Journal of Visual Culture & Harun Farocki Institut 32. Available at <a href="https://www.harun-farocki-institut.org/en/2020/06/26/the-digital-classroom-and-the-digital-studio-journal-of-visual-culture-hafi-32/">harun-farocki-institut.org/en/2020/06/26/the-digital-classroom-and-the-digital-studio-journal-of-visual-culture-hafi-32/</a> <a href="https://maxkohler.com/posts/2020-09-28-in-loving-memory-of-degree-shows/#fnref5" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn6" class="footnote-item"><p>Ane Hjort Guttu (2020): <em>The End of Art Education as We Know It.</em> In Kunstkritikk, available at <a href="https://kunstkritikk.com/the-end-of-art-education-as-we-know-it/">https://kunstkritikk.com/the-end-of-art-education-as-we-know-it/</a> <a href="https://maxkohler.com/posts/2020-09-28-in-loving-memory-of-degree-shows/#fnref6" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn7" class="footnote-item"><p>Liverpool School of Art and Design (2020): <em>Degree Show on Mars</em>. Available at <a href="https://www.degreeshowonmars.com/">degreeshowonmars.com/</a> <a href="https://maxkohler.com/posts/2020-09-28-in-loving-memory-of-degree-shows/#fnref7" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn8" class="footnote-item"><p>This text is also published on <a href="https://medium.com/@maxakohler/in-loving-memory-of-degree-shows-5c73e8cc4aa0">Medium</a> <a href="https://maxkohler.com/posts/2020-09-28-in-loving-memory-of-degree-shows/#fnref8" class="footnote-backref">↩︎</a></p>
</li>
</ol>
</section>
Every Wordpress Editor Block2020-09-28T00:00:00Zhttps://maxkohler.com/posts/2020-10-17-every-wordpress-gutenberg-block/<p>I've been doing some Wordpress theme development recently, and I couldn't find a decent reference for the blocks that come with the <a href="https://developer.wordpress.org/block-editor/developers/">default Wordpress editor</a>. So I'll keep one here.</p>
<h2 id="wordpress-core">Wordpress Core</h2>
<p>I'm pulling these out of <a href="https://github.com/WordPress/gutenberg/tree/master/packages/block-library/src">the Wordpress source</a>.</p>
<h3 id="content">Content</h3>
<ul>
<li><code>core/audio</code></li>
<li><code>core/calendar</code></li>
<li><code>core/buttons</code></li>
<li><code>core/button</code> – Only used inside <code>core/buttons</code>.</li>
<li><code>core/code</code></li>
<li><code>core/classic</code></li>
<li><code>core/cover</code></li>
<li><code>core/separator</code></li>
<li><code>core/embed</code></li>
<li><code>core/file</code></li>
<li><code>core/gallery</code></li>
<li><code>core/heading</code></li>
<li><code>core/html</code></li>
<li><code>core/image</code></li>
<li><code>core/list</code></li>
<li><code>core/paragraph</code></li>
<li><code>core/preformatted</code></li>
<li><code>core/pullquote</code></li>
<li><code>core/quote</code></li>
<li><code>core/table</code></li>
<li><code>core/verse</code></li>
<li><code>core/video</code></li>
<li><code>core/youtube</code></li>
<li><code>core/facebook</code></li>
<li><code>core/instagram</code></li>
<li><code>core/vimeo</code></li>
</ul>
<h3 id="layout">Layout</h3>
<ul>
<li><code>core/block</code></li>
<li><code>core/columns</code></li>
<li><code>core/column</code> – Only used inside <code>core/columns</code>.</li>
<li><code>core/more</code></li>
</ul>
<h3 id="relational">Relational</h3>
<ul>
<li><code>core/archives</code></li>
<li><code>core/categories</code></li>
<li><code>core/latest-comments</code></li>
<li><code>core/latest-posts</code></li>
<li><code>core/media-text</code></li>
<li><code>core/missing</code></li>
<li><code>core/navigation-link</code></li>
<li><code>core/navigation</code></li>
<li><code>core/nextpage</code></li>
<li><code>core/group</code></li>
</ul>
<h2 id="disabling-blocks-in-the-post-editor">Disabling blocks in the Post Editor</h2>
<p>These identifiers are useful because they let you define a limited set of blocks that will be available in the post editor from your <code>functions.php</code> file.</p>
<pre class="language-php"><code class="language-php"><span class="token keyword">function</span> <span class="token function-definition function">theme_allowed_block_types</span><span class="token punctuation">(</span><span class="token variable">$allowed_block_types</span><span class="token punctuation">)</span><span class="token punctuation">{</span>
<span class="token keyword">return</span> <span class="token keyword">array</span><span class="token punctuation">(</span>
<span class="token string single-quoted-string">'core/paragraph'</span><span class="token punctuation">,</span>
<span class="token string single-quoted-string">'core/heading'</span><span class="token punctuation">,</span>
<span class="token string single-quoted-string">'core/list'</span>
<span class="token comment"># Add more identifiers here</span>
<span class="token punctuation">)</span><span class="token punctuation">;</span>
<span class="token punctuation">}</span>
<span class="token function">add_filter</span><span class="token punctuation">(</span><span class="token string single-quoted-string">'allowed_block_types'</span><span class="token punctuation">,</span> <span class="token string single-quoted-string">'theme_allowed_block_types'</span><span class="token punctuation">)</span><span class="token punctuation">;</span></code></pre>
<p>In addition to this, I like to disable the default block CSS and style them myself instead. You do that by attaching a function to the <a href="https://developer.wordpress.org/reference/hooks/wp_print_styles/"><code>wp_print_styles</code></a> hook:</p>
<pre class="language-php"><code class="language-php"><span class="token keyword">function</span> <span class="token function-definition function">remove_block_css</span><span class="token punctuation">(</span><span class="token punctuation">)</span>
<span class="token punctuation">{</span>
<span class="token function">wp_deregister_style</span><span class="token punctuation">(</span><span class="token string single-quoted-string">'wp-block-library'</span><span class="token punctuation">)</span><span class="token punctuation">;</span>
<span class="token punctuation">}</span>
<span class="token function">add_action</span><span class="token punctuation">(</span><span class="token string single-quoted-string">'wp_print_styles'</span><span class="token punctuation">,</span> <span class="token string single-quoted-string">'remove_block_css'</span><span class="token punctuation">)</span><span class="token punctuation">;</span></code></pre>
Continuous Typography2020-11-22T00:00:00Zhttps://maxkohler.com/posts/continuous-typography/<p>Here are some notes about an idea I've been calling Continuous Typography for the sake of thinking about it. It's a way of thinking about typography in terms of continuous functions, rather than absolute values.</p>
<p>Functions (you'll recall from your maths textbook) produce different results based on one or more input parameters. For example, the function $f(x) = 3x + 2$ will return different results depending on the value of its input parameter $x$.</p>
<p>If we apply this idea to typography, it allows us to make design decisions relative to variable input parameters like screen size, connection speed, user preferences, and so on. This can apply to any environment, but it's especially useful for typesetting on the web.</p>
<p>Note that most of these ideas aren't very original anymore; I'm largely synthesising here for my own understanding. Take a look at the footnotes for the original sources.</p>
<h2 id="the-problem">The Problem</h2>
<p>When you're developing a piece of typography you have to define a series of relationships:</p>
<ul>
<li>The space between letters vs. the space between words, lines, and paragraphs</li>
<li>The size and weight of headlines vs. the body copy</li>
<li>The shape of the text block vs. the shape of the page</li>
</ul>
<p>There are all kinds of methods to do this (Bringhurst fills a whole chapter with them in <em>Elements of Typographic Style</em><sup class="footnote-ref"><a href="https://maxkohler.com/posts/continuous-typography/#fn1" id="fnref1">1</a></sup>), but in any case you eventually arrive at a set of values for your measure, type size, weight, spacing and so on that produce whatever visual expression you set out to achieve.</p>
<figure class="post-figure big">
<img alt="A paragraph is set in a serif typeface. Values for measure, font size, line height, etc. are shown in red." loading="lazy" src="https://maxkohler.com/assets/continuous-type/paragraph-static.svg" />
<figcaption>
<span class="figure__caption">
<p>A block of text typeset with absolute values.</p>
</span>
<span class="figure__source">
<p>Sample text from <em>Flexible Typesetting</em> by Tim Brown.</p>
</span>
</figcaption>
</figure>
<p>Take the font size, for instance: We want to set this so it gives the right voice to the piece of writing we're working with, but it also has to be appropriate to the typeface we've chosen, the size of the page, and it should result in a comfortable number of characters per line. Other adjustments follow from it: A change in type size might compel different spacing, a change in weight, hyphenation, and so on.</p>
<p>In print, you tweak these values until you arrive at a set of numbers that produces the visual expression you aimed for. And because you're working with a piece of paper of fixed dimensions and permanent ink, you can be fairly sure that the numbers you've established will stay intact throughout the production process, and land in the reader's hand just how you intended.</p>
<p>But on the web, this method starts to fail. Unlike a paper sheet, the browser window your text will be viewed in is completely variable; it can take on any size and aspect ratio whatsoever. If we set our font size to a fixed number (<code>18px</code>, say), the relationship between it and the browser window will be different on every screen, and unpleasant on most.</p>
<p>And the size of the browser window isn't the only variable in play: Readers can modify type size and colours through their browser or operating system, or have your text translated into their own language on the fly. Your choice of typeface might well be overwritten by a user's preference or a failed network request, and even the text itself might change over time.</p>
<p>The traditional guidelines of typography about line lengths, spacing, harmonies, and so on still apply on the web; it's just that we're now trying to apply them in a context where many of their parameters have become variable. There are ways to lock down some of these parameters - there's an HTML snippet that prevents people from resizing your type, for instance - but that seems to me to run counter to the promise of the medium: that it works for anyone, anywhere.</p>
<p>What we need is a way to make typographic decisions in a way that is relative to all of these variable parameters, but still gives us some control over the resulting visual expression. The construct that lets us do this - generate different outputs depending on a set of inputs with arbitrary granularity - is called a <em>continuous function</em>.</p>
<h2 id="a-continuous-approach">A continuous approach</h2>
<p>Let's think through this by defining a single property of our text block – the font size – as a continuous function. Following the traditional approach, we might define the font size using a CSS declaration like this one:</p>
<pre class="language-css"><code class="language-css"><span class="token selector">p</span> <span class="token punctuation">{</span>
<span class="token property">font-size</span><span class="token punctuation">:</span> 16px<span class="token punctuation">;</span>
<span class="token punctuation">}</span></code></pre>
<p>The <code>16px</code> here is an absolute value. It's going to stay the same regardless of the size of the screen, the reader's preferences, and any other outside parameter. As a result, it might work fine on a tablet but will probably feel a little lost on a big desktop monitor, and uncomfortably large on a phone.</p>
<p>But CSS gives us the tools to define the font size in a way that <em>does</em> respond to outside parameters. For example, we could use the <code>vw</code> unit instead of pixels to define our font size:</p>
<pre class="language-css"><code class="language-css"><span class="token selector">p</span> <span class="token punctuation">{</span>
<span class="token property">font-size</span><span class="token punctuation">:</span> 1vw<span class="token punctuation">;</span>
<span class="token punctuation">}</span></code></pre>
<p>One <code>vw</code> is equal to one percent of the width of the reader's screen. So in the CSS declaration above we're saying: <em>The font size on paragraphs is equal to the width of the reader's screen multiplied by 0.01</em>. That's a continuous function.</p>
<p>It produces a different font size for every screen size it encounters: On a screen that's 1000 pixels wide we get a font size of 10 pixels, a 1500 pixel-wide screen results in a font size of 15 pixels, and so on. Drawn onto a coordinate system, it looks like this:</p>
<figure class="post-figure medium">
<img alt="A linear function is drawn on a coordinate system. X: Screen width, Y: Font size" loading="lazy" src="https://maxkohler.com/assets/continuous-type/function-simple.svg" />
<figcaption>
<span class="figure__caption">
<p>If we define the font size as a continuous function of the screen width, it forms a line.</p>
</span>
</figcaption>
</figure>
<p>I think this simple drawing represents a big shift in our approach to typography on the web. We're no longer placing a single point on the coordinate system (by defining a single, absolute value), but <em>a line</em> containing an infinite number of points - our typographic intent has become dimensional.</p>
<p>This idea doesn't just apply to font size, but every other aspect of our text block: Measure, letter-, line- and word spacing, indentations, weight, variable font parameters can all be defined as continuous functions of one or more input parameters. The typographer's work becomes the shaping of these functions: How steep are they? Do they have minimum and maximum values? Where are their inflection points? Are they smooth, jagged, symmetrical, cyclical, randomised? How do they relate to each other? By answering these questions one way or another, any desired visual expression can be achieved for every reader.</p>
<p>In the following section we'll look at ways this is already possible in CSS today, and what might yet be to come.</p>
<h2 id="shaping-the-function">Shaping the function</h2>
<h3 id="slope">Slope</h3>
<figure class="post-figure medium">
<img alt="4 linear functions of different slopes are drawn on a coordinate system." loading="lazy" src="https://maxkohler.com/assets/continuous-type/function-slope.svg" />
<figcaption>
<span class="figure__caption">
<p>Different numerical factors produce steeper and shallower curves.</p>
</span>
</figcaption>
</figure>
<p>A basic way to manipulate our function is to define its slope. We do this by multiplying our input variable (<code>1vw</code> in our example) by a different numerical factor:</p>
<pre class="language-css"><code class="language-css"><span class="token selector">p</span> <span class="token punctuation">{</span>
<span class="token property">font-size</span><span class="token punctuation">:</span> 0.5vw<span class="token punctuation">;</span> <span class="token comment">/* This produces a shallow curve */</span>
<span class="token property">font-size</span><span class="token punctuation">:</span> 1vw<span class="token punctuation">;</span>
<span class="token property">font-size</span><span class="token punctuation">:</span> 2vw<span class="token punctuation">;</span>
<span class="token property">font-size</span><span class="token punctuation">:</span> 5vw<span class="token punctuation">;</span> <span class="token comment">/* This produces a steep curve */</span>
<span class="token punctuation">}</span></code></pre>
<p>Bigger numerical factors produce steeper curves. A steeper curve, in this example, causes the font size to change more aggressively with the screen width.</p>
<h3 id="minimum-and-maximum-values">Minimum and maximum values</h3>
<figure class="post-figure medium">
<img alt="3 linear functions with different minimum and maximum values are drawn on a coordinate system." loading="lazy" src="https://maxkohler.com/assets/continuous-type/function-clamp.svg" />
<figcaption>
<span class="figure__caption">
<p>Minimum and maximum values produce flat sections on either side of the slope.</p>
</span>
</figcaption>
</figure>
<p>It's often useful to define minimum and maximum values for a given property. We can do this by using the <code>min()</code> and <code>max()</code> functions in CSS:</p>
<pre class="language-css"><code class="language-css"><span class="token selector">p</span> <span class="token punctuation">{</span>
<span class="token comment">/* max() returns the larger of the two input values,
so this will never dip below 16px */</span>
<span class="token property">font-size</span><span class="token punctuation">:</span> <span class="token function">max</span><span class="token punctuation">(</span>16px<span class="token punctuation">,</span> 2vw<span class="token punctuation">)</span><span class="token punctuation">;</span>
<span class="token comment">/* min() returns the smaller of the two input values,
so this will never grow beyond 32px */</span>
<span class="token property">font-size</span><span class="token punctuation">:</span> <span class="token function">min</span><span class="token punctuation">(</span>32px<span class="token punctuation">,</span> 2vw<span class="token punctuation">)</span><span class="token punctuation">;</span>
<span class="token punctuation">}</span></code></pre>
<p>We can also set both minimum and maximum values at the same time using the <code>clamp()</code> function:</p>
<pre class="language-css"><code class="language-css"><span class="token selector">p</span> <span class="token punctuation">{</span>
<span class="token comment">/* This will produce a value between 14px and 32px */</span>
<span class="token property">font-size</span><span class="token punctuation">:</span> <span class="token function">clamp</span><span class="token punctuation">(</span>14px<span class="token punctuation">,</span> 1.5vw<span class="token punctuation">,</span> 32px<span class="token punctuation">)</span><span class="token punctuation">;</span>
<span class="token punctuation">}</span></code></pre>
<p>I tend to set these values by eye, but because we're working with functions we have the whole toolkit of mathematics to draw on if necessary. For instance, we could use linear algebra to calculate minimum and maximum values that correspond to specific screen sizes<sup class="footnote-ref"><a href="https://maxkohler.com/posts/continuous-typography/#fn2" id="fnref2">2</a></sup>, or linear regression to derive a curve from a given set of absolute values. <sup class="footnote-ref"><a href="https://maxkohler.com/posts/continuous-typography/#fn3" id="fnref3">3</a></sup></p>
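<p>As a rough sketch of that calculation (the function name and the pixel values below are my own illustrative choices, not taken from the articles cited): given two target points - say 16px at a 320px-wide screen and 24px at a 1280px-wide one - the slope and intercept of the line through them translate directly into a <code>clamp()</code> declaration.</p>

```python
def fluid_clamp(min_size, max_size, min_width, max_width):
    """Build a CSS clamp() whose middle value is the line through
    (min_width, min_size) and (max_width, max_size), in px and vw."""
    slope = (max_size - min_size) / (max_width - min_width)
    intercept = min_size - slope * min_width
    # 1vw = 1% of the viewport width, so the slope becomes (slope * 100)vw
    return (f"clamp({min_size}px, "
            f"calc({slope * 100:.4g}vw + {intercept:.4g}px), {max_size}px)")

print(fluid_clamp(16, 24, 320, 1280))
# → clamp(16px, calc(0.8333vw + 13.33px), 24px)
```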
<h3 id="functions-with-multiple-parameters">Functions with multiple parameters</h3>
<figure class="post-figure medium">
<img alt="A plane is drawn on a 3d-coordinate system. Caption: Font size = Screen width × 0.01 + Reader's default font size × 0.85" loading="lazy" src="https://maxkohler.com/assets/continuous-type/function-2d.svg" />
<figcaption>
<span class="figure__caption">
<p>If we define the font size as a function of the screen size and the reader's default font size, it forms a plane.</p>
</span>
</figcaption>
</figure>
<p>So far, we've only looked at functions with a single input parameter – the screen width. But that's not the only input we can use. For instance, it's probably a good idea to take into account the default font size the reader has set up in their device settings, in addition to the size of their screen. <sup class="footnote-ref"><a href="https://maxkohler.com/posts/continuous-typography/#fn4" id="fnref4">4</a></sup></p>
<p>We can use the <code>calc()</code> function to do this in CSS<sup class="footnote-ref"><a href="https://maxkohler.com/posts/continuous-typography/#fn5" id="fnref5">5</a></sup>:</p>
<pre class="language-css"><code class="language-css"><span class="token selector">p</span> <span class="token punctuation">{</span>
<span class="token property">font-size</span><span class="token punctuation">:</span> <span class="token function">calc</span><span class="token punctuation">(</span>1vw + 0.85rem<span class="token punctuation">)</span><span class="token punctuation">;</span>
<span class="token punctuation">}</span></code></pre>
<p>Here we're saying: <em>The font size is equal to the width of the screen multiplied by 0.01, plus the reader's default font size multiplied by 0.85</em>. If we draw this function onto a coordinate system, its output values no longer form a line but a <em>plane</em>; our typographic intent has gained a second dimension.</p>
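<p>To make the arithmetic concrete (the pixel values here are illustrative, not from the post): on a 1000px-wide screen with the browser default left at 16px, the declaration above resolves to 10px + 13.6px = 23.6px.</p>

```python
def font_size(screen_width_px, default_font_px):
    """calc(1vw + 0.85rem): screen width × 0.01 plus the reader's
    default font size × 0.85 (1vw = 1% of viewport width, 1rem = root size)."""
    return screen_width_px * 0.01 + default_font_px * 0.85

# e.g. a 1000px-wide screen with a 16px browser default:
assert abs(font_size(1000, 16) - 23.6) < 1e-9  # 10px + 13.6px
```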
<p>There is no limit to the number of input parameters our functions can draw on. The reader's connection speed, whether they have dark mode enabled<sup class="footnote-ref"><a href="https://maxkohler.com/posts/continuous-typography/#fn6" id="fnref6">6</a></sup>, their reading distance, their preferred language, even the time of day at their location may all be useful parameters for multi-dimensional typographic systems.</p>
<p>The output of one function can become the input parameter of another, too. This is exactly what happens when we set properties like <code>line-height</code> to a unitless value: it quietly pulls in the current font size as a parameter.</p>
<pre class="language-css"><code class="language-css"><span class="token selector">p</span> <span class="token punctuation">{</span>
<span class="token property">font-size</span><span class="token punctuation">:</span> <span class="token function">calc</span><span class="token punctuation">(</span>1vw + 0.85rem<span class="token punctuation">)</span><span class="token punctuation">;</span>
<span class="token property">line-height</span><span class="token punctuation">:</span> 1.3<span class="token punctuation">;</span> <span class="token comment">/* = font-size * 1.3 = (1vw + 0.85rem) * 1.3 */</span>
<span class="token punctuation">}</span></code></pre>
<h3 id="non-linear-functions">Non-linear functions</h3>
<figure class="post-figure medium">
<img alt="A curved plane is drawn on a 3d coordinate system. Caption reads: Font size = f(x,y)." loading="lazy" src="https://maxkohler.com/assets/continuous-type/function-wave.svg" />
<figcaption>
</figcaption>
</figure>
<p>So far we've only looked at <em>linear functions</em>, or functions that produce straight lines when drawn on a coordinate system. But there is no conceptual reason our typography should be limited to these. It's entirely possible we may need exponential, sinusoid, stepped, randomised, or yet more exotic function types to achieve specific typographic expressions.</p>
<p>As I write this, there is no simple way to do this in CSS. It is possible to stitch together multiple linear functions using media queries, and so approximate more complex curves, but the code quickly becomes unwieldy. Sass includes a powerful math module which can be used to abstract some of this complexity away, but a barrier to entry remains.<sup class="footnote-ref"><a href="https://maxkohler.com/posts/continuous-typography/#fn7" id="fnref7">7</a></sup></p>
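<p>A crude, stepped variant of that stitching idea can be sketched outside CSS (everything below - the curve, the breakpoints, the helper name - is an illustrative assumption, not code from the Sass module mentioned above): sample a non-linear curve at a handful of widths and emit one media query per sample.</p>

```python
import math

def stepped_media_queries(curve, widths):
    """Approximate a non-linear font-size curve with one
    @media rule per sampled screen width."""
    rules = []
    for w in widths:
        rules.append(
            f"@media (min-width: {w}px) {{ p {{ font-size: {curve(w):.2f}px; }} }}"
        )
    return "\n".join(rules)

# A hypothetical exponential curve: grows slowly at first, then faster
curve = lambda w: 14 * math.exp(w / 2000)
print(stepped_media_queries(curve, [320, 768, 1024, 1440]))
```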
<p>Mike Riethmuller (who developed both of those solutions) suggests that a better way to achieve these non-linear functions in CSS would be to make the <a href="https://developer.mozilla.org/en-US/docs/Web/CSS/easing-function">Easing Module</a> available outside of the animation context, to which it is currently bound<sup class="footnote-ref"><a href="https://maxkohler.com/posts/continuous-typography/#fn8" id="fnref8">8</a></sup>. This would be an elegant solution indeed: the easing module supports many useful function types (including Bezier curves, which typographers are already familiar with) in addition to basic linear functions, and many design tools already include powerful interfaces to edit these curves visually.</p>
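<p>As a sketch of what such a mechanism could enable (in plain Python, since no browser supports it; the control points are the standard <code>ease-in-out</code> values, everything else is an illustrative assumption): a CSS-style cubic Bézier easing curve maps a normalised screen width onto a font-size range.</p>

```python
def cubic_bezier(p1x, p1y, p2x, p2y):
    """Easing function y(x) for a cubic Bézier with fixed endpoints
    (0,0) and (1,1), solved for the parameter t by bisection."""
    def coord(t, a, b):
        return 3 * (1 - t) ** 2 * t * a + 3 * (1 - t) * t ** 2 * b + t ** 3
    def ease(x):
        lo, hi = 0.0, 1.0
        for _ in range(50):  # find t where the curve's x-coordinate equals x
            mid = (lo + hi) / 2
            if coord(mid, p1x, p2x) < x:
                lo = mid
            else:
                hi = mid
        return coord((lo + hi) / 2, p1y, p2y)
    return ease

ease_in_out = cubic_bezier(0.42, 0.0, 0.58, 1.0)

def font_size(width, min_w=320, max_w=1280, min_s=16, max_s=24):
    """Ease the normalised screen width, then scale to the size range."""
    x = min(max((width - min_w) / (max_w - min_w), 0.0), 1.0)
    return min_s + (max_s - min_s) * ease_in_out(x)
```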
<p>The relevant issue on the CSS Working group <a href="https://github.com/w3c/csswg-drafts/issues/581">is still open</a>, so we likely won't see a browser implementation of this soon.</p>
<hr />
<figure class="post-figure big">
<img alt="A paragraph and small graph diagrams." loading="lazy" src="https://maxkohler.com/assets/continuous-type/paragraph-fluid.svg" />
<figcaption>
<span class="figure__caption">
<p>A block of text typeset with continuous functions.</p>
</span>
<span class="figure__source">
<p>Sample text from <em>Flexible Typesetting</em> by Tim Brown.</p>
</span>
</figcaption>
</figure>
<p>But regardless of the precise implementation, I think the idea that any typographic attribute (including variable font parameters) can be a function (linear, exponential, stepped, Bezier, random, or otherwise) of any given input variable (user preference, screen dimensions, connection speed, time of day, display language, or whatever else) is an incredibly powerful one, and worth exploring as an aesthetic as well as a technical proposition. I'm already using basic linear functions in practice with promising results.</p>
<p>I'm especially interested in what a visual design tool would look like if it was built on the model of continuous typography. Tim Brown makes this point in <em>Flexible Typesetting</em> (2018), writing: <em>"Your design tool is working against you. It is stuck in the traditional mindset of absolute measurements. This is precisely one reason why people very good at web design argue that designers should learn to write code. No mainstream design tools […] are completely appropriate for the practice of typesetting today."</em><sup class="footnote-ref"><a href="https://maxkohler.com/posts/continuous-typography/#fn9" id="fnref9">9</a></sup></p>
<p>To my knowledge this situation hasn't changed much since - so there's plenty of room for exploration. With better tools, continuous typography might become more than a way to <em>make the type look good on a phone</em>: a new method for visual expression in its own right. <sup class="footnote-ref"><a href="https://maxkohler.com/posts/continuous-typography/#fn10" id="fnref10">10</a></sup></p>
<h2 id="update-february-2%2C-2021">Update February 2, 2021</h2>
<p>I finally got around to writing a demo of what a design tool for continuous typography might look like - basically a working version of the final figure above. <a href="https://awesomephant.github.io/continuous-typography/">Play with it here</a>, or <a href="https://maxkohler.com/work/continuous-type-tester/">read more about it here</a>.</p>
<section class="footnotes">
<ol class="footnotes-list">
<li id="fn1" class="footnote-item"><p>Robert Bringhurst (2016): <em>The Elements of Typographic Style, Version 4.2</em>, Chapter 8. Hartley & Marks. <a href="https://maxkohler.com/posts/continuous-typography/#fnref1" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn2" class="footnote-item"><p>Pedro Rodriguez (2020): *<a href="https://css-tricks.com/linearly-scale-font-size-with-css-clamp-based-on-the-viewport/">Linearly Scale font-size with CSS clamp() Based on the Viewport</a> <a href="https://maxkohler.com/posts/continuous-typography/#fnref2" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn3" class="footnote-item"><p>Jake Wilson (2017): <em><a href="https://medium.com/@jakobud/css-polyfluidsizing-using-calc-vw-breakpoints-and-linear-equations-8e15505d21ab">CSS Poly Fluid Sizing using calc(), vw, breakpoints and linear equations</a></em> <a href="https://maxkohler.com/posts/continuous-typography/#fnref3" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn4" class="footnote-item"><p>In fact, the user’s default font size should probably be the first parameter we care about. The only reason I’m using the screen width here is that its effects are easier to visualise. <a href="https://maxkohler.com/posts/continuous-typography/#fnref4" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn5" class="footnote-item"><p>To my knowledge the earliest description of this technique is a 2015 article by Mike Riethmuller called <em><a href="https://www.madebymike.com.au/writing/precise-control-responsive-typography/">Precise control over responsive typography</a></em> <a href="https://maxkohler.com/posts/continuous-typography/#fnref5" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn6" class="footnote-item"><p>Greg Gibson (2020): <em><a href="https://css-tricks.com/using-css-custom-properties-to-adjust-variable-font-weights-in-dark-mode/">Using CSS Custom Properties to Adjust Variable Font Weights in Dark Mode </a></em> <a href="https://maxkohler.com/posts/continuous-typography/#fnref6" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn7" class="footnote-item"><p>Mike Riethmuller (2017): <em><a href="https://www.madebymike.com.au/writing/non-linear-interpolation-in-css/">Non-linear Interpolation in CSS: A solution for transitioning lengths values in CSS through more than one bending point.</a></em> <a href="https://maxkohler.com/posts/continuous-typography/#fnref7" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn8" class="footnote-item"><p>Mike Riethmuller (2018): <em><a href="https://www.madebymike.com.au/writing/interpolation-without-animation/">Interpolation in CSS without animation</a></em> <a href="https://maxkohler.com/posts/continuous-typography/#fnref8" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn9" class="footnote-item"><p>Tim Brown (2018): <em>Flexible Typesetting</em>, p 44. A Book Apart. <a href="https://maxkohler.com/posts/continuous-typography/#fnref9" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn10" class="footnote-item"><p>This post is also <a href="https://maxakohler.medium.com/continuous-typography-15759ac4ae62">on Medium</a> <a href="https://maxkohler.com/posts/continuous-typography/#fnref10" class="footnote-backref">↩︎</a></p>
</li>
</ol>
</section>
It’s Probably Art2020-12-06T00:00:00Zhttps://maxkohler.com/posts/monolith/<p><span class="leadin">The Utah Monolith</span> stood at 38°20'35.18" North, 109°39'58.32" West in Red Rock Country, Utah, on a piece of flat ground between two diverging rock faces. It was installed there by an unknown artist sometime in 2016 (that's when it appears in satellite images), discovered by wildlife officials on November 23, 2020, and removed ten days later by four unknown men. <sup class="footnote-ref"><a href="https://maxkohler.com/posts/monolith/#fn1" id="fnref1">1</a></sup></p>
<p>It was ten to twelve feet tall, and made of polished sheet metal riveted together along the edges, forming a slender, triangular column. Until its removal, the structure was embedded (possibly cemented) into a hole of the same cross section, probably cut into the hard ground with a concrete saw.</p>
<p>In the handful of photographs that exist of the monolith, its formal beauty is clearly visible: Its geometric form, sharp corners and flat sides, the silvery shine of its metal surface form a striking contrast to the weathered, reddish-brown rock of the surrounding landscape. The sculpture is framed by the near-vertical rock face on either side; the symmetry gives the scene a feeling of gravity, like a shrine carved from the earth.</p>
<p>The monolith related, too, to the park rangers (and later, the handful of ambitious hikers) encountering it. Its height of twelve feet - twice that of the average man - looks like an artistic choice rooted in classical ideas of proportion. It's not a million miles away from a Greek column.</p>
<p>But this sense of aesthetic familiarity is unsettled by the apparent lack of any material relation between the monolith and the earth around it. Apart from the conspicuous absence of vegetation, the ground bears no obvious signs of work. There is no plinth; the monolith has no base or visible support; it's as if it had simply emerged from the earth already in its finished, inert state. Its matte surface looks too flawless for an object exposed to the elements, and produces little reflection of the surrounding landscape. If it weren't for the faintly visible rivets, and a small mound of loose earth by its base, you might mistake it for a misplaced video game asset clipping through the ground from another realm.</p>
<p>As you observe it, the monolith seems to oscillate between these two modes of dialogue with its surroundings - one rooted in familiar elements of composition and proportion, the other in its otherworldly materiality - never quite reaching an equilibrium.</p>
<hr />
<p><span class="leadin">Like most people</span>, I learned about the monolith from the news. The New York Times’ opening paragraph – <em>"A team surveying bighorn sheep for Utah’s wildlife agency found the strange object, 10 to 12 feet tall, embedded in the ground in a remote part of Red Rock Country. It’s probably art, officials said"</em> - sounded too much like the fake news reports at the beginning of a disaster fiction film to be ignored.</p>
<p>The rest of the story was uneventful, consisting mostly of statements from local officials confirming that they had no idea what the object was or how it got there, either. They declined to give the precise location of the monolith for fear of potential visitors becoming stranded in the remote desert and needing rescue.</p>
<p>As I read through these non-statements, I couldn’t help but imagine the rest of the movie evoked by that first paragraph:</p>
<pre class="language-text"><code class="language-text">EXT. DESERT - MORNING
GENERAL: How long ‘till we get this goddamn thing dug up?
Governor wants it gone before the tourists start showin’ up.
SCIENTIST: Sir, we’ve been digging all night, but it
doesn’t seem to, uh, end…
GENERAL: You mean to tell me it goes all the way through
the earth? Are you out of your mind lieutenant?</code></pre>
<p>Cut to a research ship on the Indian Ocean, on the opposite side of the Earth, you get the idea.</p>
<figure class="post-figure small">
<img alt="Monolith" loading="lazy" src="https://maxkohler.com/assets/monolith/IMG_3534.jpeg" />
<figcaption>
<span class="figure__caption">
<p>IMG_3534</p>
</span>
<span class="figure__source">
<p><a href="https://dpsnews.utah.gov/dps-aero-bureau-encounters-monolith-in-red-rock-country/">Utah Department of Public Safety</a></p>
</span>
</figcaption>
</figure>
<p><span class="leadin">For all its sculptural qualities</span>, most of us never experienced the monolith as a sculpture, but as photographs of one, appearing (to paraphrase the art critic John Berger) not on the austere walls of a gallery, but the screens of our phones and laptops in our homes.</p>
<p>We have been looking at works of art in this way for a long time: Berger wrote his seminal book <em>Ways of Seeing</em> in 1972<sup class="footnote-ref"><a href="https://maxkohler.com/posts/monolith/#fn2" id="fnref2">2</a></sup>, when television sets had just entered the home, and Walter Benjamin saw the trend in the 19th century, when photo-engraving had made the mass-reproduction of paintings possible for the first time. But usually there is at least the option (however theoretical) to go see a work in person. This wasn't the case with the monolith - not only was it installed in an inaccessible location, but we've been confined to our homes for months; and travel between states, let alone countries, is a distant memory for most of us.</p>
<p>All we have are the four still photographs and three short videos of the sculpture released by the Utah Department of Public Safety <sup class="footnote-ref"><a href="https://maxkohler.com/posts/monolith/#fn3" id="fnref3">3</a></sup> (I'll refer to them here by their filenames).</p>
<p>In <em>Monolith.mp4</em>, which appears to be the first in the sequence, we see three men in green overalls descending a slope and walking slowly toward the monolith. The fourth man, who is holding the camera, comments: <em>"Okay, the intrepid explorers go down to investigate the, uh, alien life form"</em>. In the following images<sup class="footnote-ref"><a href="https://maxkohler.com/posts/monolith/#fn4" id="fnref4">4</a></sup> the camera has followed the men down the slope, and we see them examining the monolith more closely. Though the camera moves around throughout the sequence, we always see the monolith from a more or less frontal perspective.</p>
<figure class="post-figure small">
<img alt="Monolith" loading="lazy" src="https://maxkohler.com/assets/monolith/IMG_3946-2.jpeg" />
<figcaption>
<span class="figure__caption">
<p>IMG_3946</p>
</span>
<span class="figure__source">
<p><a href="https://dpsnews.utah.gov/dps-aero-bureau-encounters-monolith-in-red-rock-country/">Utah Department of Public Safety</a></p>
</span>
</figcaption>
</figure>
<p><em>3946</em> is particularly evocative. We see two men, both wearing olive-green overalls and heavy boots, forming an element of repetition against the singular monolith. The worn rock face fills the background. One of the men stands on the other's shoulders, his head reaching just above the monolith. He is holding two corners of the monolith to stabilise himself as he turns slightly toward the left, looking across the top of the monolith at a point on the rock wall a few feet away. The mobile phone's camera brings the entire scene into sharp focus.</p>
<p>It's perhaps the image that most effectively captures the <em>sublime</em> element of the scene - that elusive quality of physical, spiritual, and aesthetic greatness beyond human comprehension sought by artists since the 18th century.<sup class="footnote-ref"><a href="https://maxkohler.com/posts/monolith/#fn5" id="fnref5">5</a></sup> Here we see two men doing their very best to understand the object in front of them; we might even read the act of standing on one another's shoulders as a metaphor for the scientific method. But on top of the monolith, there is no knowledge to be found - only the towering, impenetrable, ancient rock face beyond.</p>
<figure class="post-figure small">
<img alt="A lone hunter is seen among tall, dark trees in a winter landscape." loading="lazy" src="https://maxkohler.com/assets/monolith/chasseur.jpeg" />
<figcaption>
<span class="figure__caption">
<p>Caspar David Friedrich: The Chasseur in the Forest (1814). Oil on canvas, 66×47cm. Private collection.</p>
</span>
<span class="figure__source">
<p>Public Domain, via <a href="https://commons.wikimedia.org/wiki/File:Caspar_David_Friedrich_068.jpg">Wikimedia Commons</a></p>
</span>
</figcaption>
</figure>
<p>This sense of awe in the face of Nature is often associated with works by the English painter <a href="https://www.tate.org.uk/art/artworks/turner-fishermen-at-sea-t01585">J. M. W. Turner</a> (1775-1851), but he never did much for me (maybe I grew up too far from the sea). I recognise it more easily in the stillness of a painting like Caspar David Friedrich's <em>Chasseur in the Forest</em> (1814). Here, too, the human figure shrinks away against an overwhelming landscape, the black trees resisting any attempt at enlightenment.</p>
<hr />
<p><span class="leadin">The Monolith's location</span> didn't stay secret for long. Less than a day after the <em>Times</em> published their story, a Reddit user named Bear__Fucker had found the monolith on Google Earth<sup class="footnote-ref"><a href="https://maxkohler.com/posts/monolith/#fn6" id="fnref6">6</a></sup>, using public data about the helicopter’s flight path and clues like the colour of rock, and the shape of the hills in the distance visible in the published photographs.</p>
<p>After a few failed attempts at getting <a href="https://earth.google.com/web/search/38%c2%b020%2735.18%22N+109%c2%b039%2758.32%22W/@38.3431056,-109.6662,1318.27402438a,807.2294659d,35y,0h,45t,0r/data=CmIaOBIyGYiJYeLqK0NAIRcmUwWjalvAKh4zOMKwMjAnMzUuMTgiTiAxMDnCsDM5JzU4LjMyIlcYAiABIiYKJAkqQIIH6PA3QBEqQIIH6PA3wBlkqz2e96tKQCFhqz2e96tKwCgC">the link</a> to load up on my laptop, I copied their coordinates and went looking for the place myself. The ten minutes I spent moving my cursor slowly across the endless Utah desert while eyeing the position display in the corner of the screen were strangely exhilarating, like a real-life treasure hunt with a supernatural undercurrent.</p>
<p>Finally it appeared: a thin black line across the washed-out ground. Here was <em>proof</em> that the thing really existed, and had done for years.</p>
<p>It occurred to me that the monolith was just the right size to be viewed by satellite: large enough to be clearly visible in public images without revealing too much of the mystery. It reminded me of those satellite calibration targets elsewhere in the desert.<sup class="footnote-ref"><a href="https://maxkohler.com/posts/monolith/#fn7" id="fnref7">7</a></sup></p>
<hr />
<p><span class="leadin">It's no accident</span>, I think, that the images of the monolith are so compelling: The scene feels like it was always designed to be photographed.</p>
<p>Its romantic symmetry only works from "the front", so that's where the photographer naturally positions themselves. The monolith's scale keeps them from getting too close; cropping it would look <em>wrong</em>. Even the lighting was considered: had the monolith been placed a few feet back, it would disappear into the shadow.</p>
<p>In this view, the monolith is less of a sculpture and more of a <em>prop</em> in an elaborate outdoor set. The wildlife officials and hikers become unwitting extras.</p>
<figure class="post-figure small">
<img alt="The Utah monolith is toppled in a blurry cellphone photo" loading="lazy" src="https://maxkohler.com/assets/monolith/toppled.PNG" />
<figcaption>
<span class="figure__caption">
<p>The Monolith toppled</p>
</span>
<span class="figure__source">
<p>Michael James Newlands / <a href="https://www.nytimes.com/2020/12/01/arts/design/utah-monolith-removed-instagram.html">The New York Times</a></p>
</span>
</figcaption>
</figure>
<p>In fact, the cellphone images of the toppled Monolith show that it was built exactly like the prototypical Hollywood prop: not of solid metal, but thin sheets of aluminium mounted to a hidden plywood frame <sup class="footnote-ref"><a href="https://maxkohler.com/posts/monolith/#fn8" id="fnref8">8</a></sup>.</p>
<p>From here it's only a small leap to an earlier version of the same prop: the alien monolith in Stanley Kubrick's <em>2001</em> (1968). It was twelve feet tall (just like the Utah Monolith, and the pyramid in Arthur C. Clarke's original short story<sup class="footnote-ref"><a href="https://maxkohler.com/posts/monolith/#fn9" id="fnref9">9</a></sup>), though of a different cross-section and made of wood covered in black paint and graphite powder.<sup class="footnote-ref"><a href="https://maxkohler.com/posts/monolith/#fn10" id="fnref10">10</a></sup> It appears in each of the film’s four episodes, but the first two seem most significant here: Against the background of a prehistoric desert landscape, among a group of hominids at the dawn of civilisation, and later at the bottom of a starkly-lit excavation site on the Moon, surrounded by weary astronauts.</p>
<p>We clearly see the impact of Kubrick's photography on whoever set up the Utah images. The shock of the sharp-edged, artificial object against the weathered landscape, the group of weary explorers descending towards it, even the spacesuits are mirrored here. The reference is so clear that the man holding the camera in <em>Monolith.mp4</em> recognises it within seconds.</p>
<p>Kubrick’s film in turn follows photographs and paintings of the sublime landscape of earlier periods, both in its theme and its aesthetics. I wouldn't be surprised to find a photocopy of <em>The Chasseur</em> somewhere in Kubrick's vast archive.<sup class="footnote-ref"><a href="https://maxkohler.com/posts/monolith/#fn11" id="fnref11">11</a></sup></p>
<p>But the chain of influence runs the other way, too: Our whole notion of <em>The Landscape</em>, and in some cases its physical reality, are themselves cultural productions. In the 18th century, European aristocrats planted trees, dug lakes, and built prop ruins of Greek or Roman temples to bring their land closer to what they had seen in paintings of their day.<sup class="footnote-ref"><a href="https://maxkohler.com/posts/monolith/#fn12" id="fnref12">12</a></sup></p>
<p>In the 19th century, photography, having inherited the visual language of painting, helped shape the fantasy of the great, "untouched" American landscape that still lingers today. Photographers like Carleton Watkins didn’t “discover” places like Yosemite Valley, but constructed them with complex optical machinery and painstaking work in the darkroom.<sup class="footnote-ref"><a href="https://maxkohler.com/posts/monolith/#fn13" id="fnref13">13</a></sup></p>
<figure class="post-figure embed-container post">
<div class="embed-placeholder">
<p>
This page contains embedded content from <a href="https://vimeo.com/">Vimeo</a>, who might use cookies and other technologies to track you. To view this content, click <em>Allow Vimeo content</em>.
</p>
<button class="embed-load button">Allow Vimeo content</button>
</div>
<div class="embed" style="padding:45% 0 0 0;position:relative;">
<iframe data-src="https://player.vimeo.com/video/70173915?autoplay=0&loop=1&title=0&byline=0&portrait=0&muted=1" style="position:absolute;top:0;left:0;width:100%;height:100%;" frameborder="0" allow="autoplay; fullscreen" allowfullscreen=""></iframe>
</div>
<figcaption>
<span class="figure-caption">
<p><em>2001</em>'s Star Gate sequence (1968)</p>
</span>
</figcaption>
</figure>
<p>In 1968, those familiar images of the American Landscape re-appear in <em>2001</em>’s terrifying <em>Star Gate</em> sequence. As we're transported through interstellar space, we see them disfigured by swirling photochemicals and distorted glass, all dissolving into a pool of pure, acidic colour. It’s the ultimate upside-down of the stately silver-gelatin prints of the previous century, but the lineage is there nonetheless.</p>
<p>Kubrick shot that sequence just a hundred miles south of where the Utah Monolith appeared sometime in 2016: The latest statement in a century-long dialogue between image-making and the landscape.</p>
<hr />
<p>Over the following days, I kept following the news stories about the monolith. There was the question of attribution: The gallerist David Zwirner made, then walked back, a statement saying it was a work by the minimalist sculptor John McCracken (whose estate Zwirner happens to represent).<sup class="footnote-ref"><a href="https://maxkohler.com/posts/monolith/#fn14" id="fnref14">14</a></sup> A few other artists were floated, but swiftly issued denials.</p>
<p>A handful of social media users went to see the monolith in real life, including one ex-military man who drove 200 miles through the night to be there first (I admire the commitment).<sup class="footnote-ref"><a href="https://maxkohler.com/posts/monolith/#fn15" id="fnref15">15</a></sup></p>
<p>Finally it disappeared, ten days after it was discovered. According to a couple of eyewitnesses, four men made it "their mission" to return the landscape to its "natural state"<sup class="footnote-ref"><a href="https://maxkohler.com/posts/monolith/#fn16" id="fnref16">16</a></sup>, itself an act loaded with aesthetic and linguistic baggage. I don't exactly know how to feel about it. On one hand it feels like a loss; I enjoyed the idea of this unexplained object out there in the desert, unaffected by the world around it - as a friend of mine put it in a text message, <em>it’s nice to feel an ounce of magic in a shitty time</em>.</p>
<p>But on the other hand, I think the monolith's appeal was never really about the physical thing anyway; that was just a prop. What’s more important is the cultural output inspired by it: the images, videos, news reports, collective speculation, and even the after-dark performance of its destruction. That collective body of work survives.</p>
<p>In this year of never-ending crisis, where any attempt to look more than a few days into the future seems utterly hopeless, and our movements have become small and repetitive, the Utah Monolith managed what many online art experiences struggled to do: For a moment, it led our gaze, and our minds, away from the world immediately in front of us: up, toward the stars ★</p>
<p class="note">
This story first appeared <a href="https://maxakohler.medium.com/its-probably-art-b554f7c5f3e0">on Medium</a>.
</p>
<section class="footnotes">
<ol class="footnotes-list">
<li id="fn1" class="footnote-item"><p>Alan Yuhas / The New York Times (2020): <em><a href="https://www.nytimes.com/2020/11/24/us/Utah-monolith-red-rock-country.html">A Weird Monolith Is Found in the Utah Desert</a></em> <a href="https://maxkohler.com/posts/monolith/#fnref1" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn2" class="footnote-item"><p>John Berger (1972): <em>Ways of Seeing</em>. Penguin Books. <a href="https://maxkohler.com/posts/monolith/#fnref2" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn3" class="footnote-item"><p>Utah Department of Public Safety (2020): <em><a href="https://dpsnews.utah.gov/dps-aero-bureau-encounters-monolith-in-red-rock-country/">DPS Aero Bureau Encounters Monolith in Red Rock Country</a></em> <a href="https://maxkohler.com/posts/monolith/#fnref3" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn4" class="footnote-item"><p>3526, 3527, 3528, 3532, 3534, 3546, and monolith.jpg. I wonder what happened to the missing files in the sequence. <a href="https://maxkohler.com/posts/monolith/#fnref4" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn5" class="footnote-item"><p>Christine Riding and Nigel Llewellyn (2013), ‘British Art and the Sublime’, in Nigel Llewellyn and Christine Riding (eds.), <em><a href="https://www.tate.org.uk/art/research-publications/the-sublime/christine-riding-and-nigel-llewellyn-british-art-and-the-sublime-r1109418">The Art of the Sublime</a></em>, Tate Research Publication. <a href="https://maxkohler.com/posts/monolith/#fnref5" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn6" class="footnote-item"><p>Reddit/DOTTheMath (2020): <em><a href="https://www.reddit.com/r/geoguessr/comments/jzw628/help_me_find_this_obelisk_in_remote_utah/gdfapzw/">Help me find this obelisk in remote Utah wilderness</a></em> <a href="https://maxkohler.com/posts/monolith/#fnref6" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn7" class="footnote-item"><p>Atlas Obscura / randalscott: <em><a href="https://www.atlasobscura.com/places/corona-satellite-calibration-targets">Corona Satellite Calibration Targets</a></em> <a href="https://maxkohler.com/posts/monolith/#fnref7" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn8" class="footnote-item"><p>Incidentally, you could see this as an argument against the idea that the Monolith is a sculpture by McCracken - <a href="http://www.artnet.com/artists/john-mccracken/atum-RWkQgub1KfhuE0iRHMsQCA2">his are made of stainless steel</a>. <a href="https://maxkohler.com/posts/monolith/#fnref8" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn9" class="footnote-item"><p>Arthur C. Clarke (1948): <em>The Sentinel</em>, published 1951 as <em>Sentinel of Eternity</em>. Available on the <a href="https://archive.org/stream/10_Story_Fantasy_v01n01_1951-Spring_Tawrast-EXciter#page/n39/mode/2up">Internet Archive</a> <a href="https://maxkohler.com/posts/monolith/#fnref9" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn10" class="footnote-item"><p>Bruce Handy / Vanity Fair (2014): <em><a href="https://www.vanityfair.com/hollywood/2014/07/unseen-images-2011-space-odyssey-making">Weird, Unseen Images from the Making of 2001: A Space Odyssey</a></em> <a href="https://maxkohler.com/posts/monolith/#fnref10" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn11" class="footnote-item"><p>Jon Ronson / The Guardian (2004): <em><a href="https://www.theguardian.com/film/2004/mar/27/features.weekend">Citizen Kubrick</a></em> <a href="https://maxkohler.com/posts/monolith/#fnref11" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn12" class="footnote-item"><p>Paul Cooper / The Atlantic (2018): <em><a href="https://www.theatlantic.com/science/archive/2018/04/fake-ruins-europe-trend/558293/">Europe Was Once Obsessed With Fake Dilapidated Buildings</a></em> <a href="https://maxkohler.com/posts/monolith/#fnref12" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn13" class="footnote-item"><p>Ana Cecilia Alvarez (2019) in Real Life Magazine: <em><a href="https://reallifemag.com/look-for-america/">Look for America: How Land became scenery</a></em> <a href="https://maxkohler.com/posts/monolith/#fnref13" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn14" class="footnote-item"><p>Amanda Holpuch / The Guardian (2020): <em><a href="https://www.theguardian.com/us-news/2020/nov/24/monolith-utah-theories-what-is-it-mystery">Theories abound over mystery metal monolith found in Utah </a></em> <a href="https://maxkohler.com/posts/monolith/#fnref14" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn15" class="footnote-item"><p>Alexandra Mae Jones / CTV News (2020): <em><a href="https://www.ctvnews.ca/lifestyle/hiker-drove-six-hours-into-utah-desert-to-see-metal-monolith-before-it-vanished-1.5211323">Hiker drove six hours into Utah desert to see metal monolith before it vanished</a></em> <a href="https://maxkohler.com/posts/monolith/#fnref15" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn16" class="footnote-item"><p>Serge F. Kovaleski, Deborah Solomon and Zoe Rosenberg / The New York Times (2020): <em><a href="https://www.nytimes.com/2020/12/01/arts/design/utah-monolith-removed-instagram.html">How a Mysterious Monolith Vanished Overnight (It Wasn’t Aliens)</a></em> <a href="https://maxkohler.com/posts/monolith/#fnref16" class="footnote-backref">↩︎</a></p>
</li>
</ol>
</section>
How to use CSV data with Eleventy2021-02-06T00:00:00Zhttps://maxkohler.com/posts/eleventy-csv/<p>I like to <a href="https://maxkohler.com/work/digital-direction/">use spreadsheets as a CMS</a> for one-off website projects. This means I author whatever information I need in Google Sheets, export it to a CSV file, and throw that file into a static site generator to produce the HTML pages I need.</p>
<p>Eleventy doesn't have a built-in way to do that. It does have a concept of <a href="https://www.11ty.dev/docs/data-global/">global data files</a>, but those only support <code>json</code> files out of the box. If you just throw your CSV into the <code>_data</code> folder, nothing happens.</p>
<p>But there's another feature that does allow us to do this: <a href="https://www.11ty.dev/docs/data-js/">Javascript Data Files</a>. Instead of a static JSON file, we can put a Javascript file into the data folder that <code>exports</code> whatever data we need. Eleventy executes that file, and adds the output to its global data object, making it available in template files.</p>
<p>We can use this to parse our CSV file, and hand the data over to Eleventy. I'm using <a href="https://csv.js.org/parse/">csv-parse</a> here.</p>
<p>Install it with <code>npm install csv-parse</code>.</p>
<p>Then we can write a script like this:</p>
<pre class="language-js"><code class="language-js"><span class="token keyword">const</span> parse <span class="token operator">=</span> <span class="token function">require</span><span class="token punctuation">(</span><span class="token string">"csv-parse/lib/sync"</span><span class="token punctuation">)</span><span class="token punctuation">;</span>
<span class="token keyword">const</span> fs <span class="token operator">=</span> <span class="token function">require</span><span class="token punctuation">(</span><span class="token string">"fs"</span><span class="token punctuation">)</span><span class="token punctuation">;</span>
<span class="token keyword">function</span> <span class="token function">readCSV</span><span class="token punctuation">(</span><span class="token punctuation">)</span> <span class="token punctuation">{</span>
<span class="token keyword">const</span> input <span class="token operator">=</span> fs<span class="token punctuation">.</span><span class="token function">readFileSync</span><span class="token punctuation">(</span><span class="token string">"./_data/values.csv"</span><span class="token punctuation">)</span><span class="token punctuation">;</span>
<span class="token keyword">const</span> records <span class="token operator">=</span> <span class="token function">parse</span><span class="token punctuation">(</span>input<span class="token punctuation">,</span> <span class="token punctuation">{</span>
<span class="token literal-property property">columns</span><span class="token operator">:</span> <span class="token boolean">true</span><span class="token punctuation">,</span>
<span class="token literal-property property">skip_empty_lines</span><span class="token operator">:</span> <span class="token boolean">true</span><span class="token punctuation">,</span>
<span class="token punctuation">}</span><span class="token punctuation">)</span><span class="token punctuation">;</span>
console<span class="token punctuation">.</span><span class="token function">log</span><span class="token punctuation">(</span><span class="token template-string"><span class="token template-punctuation string">`</span><span class="token interpolation"><span class="token interpolation-punctuation punctuation">${</span>records<span class="token punctuation">.</span>length<span class="token interpolation-punctuation punctuation">}</span></span><span class="token string"> records found.</span><span class="token template-punctuation string">`</span></span><span class="token punctuation">)</span><span class="token punctuation">;</span>
<span class="token keyword">return</span> records<span class="token punctuation">;</span>
<span class="token punctuation">}</span>
module<span class="token punctuation">.</span><span class="token function-variable function">exports</span> <span class="token operator">=</span> <span class="token keyword">function</span> <span class="token punctuation">(</span><span class="token punctuation">)</span> <span class="token punctuation">{</span>
<span class="token keyword">const</span> data <span class="token operator">=</span> <span class="token function">readCSV</span><span class="token punctuation">(</span><span class="token punctuation">)</span><span class="token punctuation">;</span>
<span class="token keyword">return</span> data<span class="token punctuation">;</span>
<span class="token punctuation">}</span><span class="token punctuation">;</span></code></pre>
<p>We'll save that file as <code>myData.js</code> inside the <code>_data</code> folder, next to our original CSV file. As with regular data files, the filename controls under which key the data will be available. Once that's done, we can write template code like this, and it works just as expected:</p>
<pre class="language-liquid"><code class="language-liquid"><span class="token liquid language-liquid"><span class="token delimiter punctuation">{%</span> <span class="token keyword">for</span> row <span class="token keyword">in</span> myData <span class="token delimiter punctuation">%}</span></span>
<span class="token liquid language-liquid"><span class="token delimiter punctuation">{{</span> row<span class="token punctuation">.</span>title <span class="token delimiter punctuation">}}</span></span>
<span class="token liquid language-liquid"><span class="token delimiter punctuation">{%</span> <span class="token keyword">endfor</span> <span class="token delimiter punctuation">%}</span></span></code></pre>
<p>We could also <a href="https://www.11ty.dev/docs/pages-from-data/">use Eleventy's pagination feature</a> to turn our data into individual pages.</p>
<p>The idea that you can run arbitrary Javascript and pipe the results right into Eleventy's data object is pretty powerful. The documentation gives examples of <a href="https://www.11ty.dev/docs/data-js/#example-using-graphql">fetching data from an API</a> and <a href="https://www.11ty.dev/docs/data-js/#example-exposing-environment-variables">exposing environment variables</a>, but you could also do calculations, parse data in any format, or anything else you need.</p>
<h2 id="update">Update</h2>
<p>Five minutes after I wrote this, I realised that Eleventy has a built-in way to add <a href="https://www.11ty.dev/docs/data-custom/">support for custom data formats</a>. Javascript data files are still the way to go if you're fetching data from an API or doing other fanciness, but to read data from a CSV file, adding this to your <code>.eleventy.js</code> file (with <code>csv-parse</code> required as before) will do the trick:</p>
<pre class="language-js"><code class="language-js">eleventyConfig<span class="token punctuation">.</span><span class="token function">addDataExtension</span><span class="token punctuation">(</span><span class="token string">"csv"</span><span class="token punctuation">,</span> <span class="token punctuation">(</span><span class="token parameter">contents</span><span class="token punctuation">)</span> <span class="token operator">=></span> <span class="token punctuation">{</span>
<span class="token keyword">const</span> records <span class="token operator">=</span> <span class="token function">parse</span><span class="token punctuation">(</span>contents<span class="token punctuation">,</span> <span class="token punctuation">{</span>
<span class="token literal-property property">columns</span><span class="token operator">:</span> <span class="token boolean">true</span><span class="token punctuation">,</span>
<span class="token literal-property property">skip_empty_lines</span><span class="token operator">:</span> <span class="token boolean">true</span><span class="token punctuation">,</span>
<span class="token punctuation">}</span><span class="token punctuation">)</span><span class="token punctuation">;</span>
<span class="token keyword">return</span> records<span class="token punctuation">;</span>
<span class="token punctuation">}</span><span class="token punctuation">)</span><span class="token punctuation">;</span></code></pre>
Want to write a hover effect with inline CSS? Use CSS Variables.2021-03-26T00:00:00Zhttps://maxkohler.com/posts/2020-03-26-inline-css-hover-css-variables/<p>I just published a little technique for pushing what you can do with CSS inline styles over on CSS Tricks. Here's how I pitched it:</p>
<blockquote>
<p>Say you have a blog, and you want each post to have a different background colour when you hover over it - for art direction, say. You can't do that with inline styles! But I learned a trick: You write the colour value into a CSS variable (scoped to the post element), then use that to define the hover effect in your regular CSS.</p>
</blockquote>
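<p>As a rough sketch of what that looks like (the class name and colour here are made up):</p>

```html
<!-- The inline style only sets a custom property, scoped to this element -->
<article class="post" style="--hover-bg: papayawhip;">A blog post</article>

<style>
  /* The regular stylesheet defines the hover effect using that variable */
  .post:hover {
    background-color: var(--hover-bg);
  }
</style>
```

<p>Each post can carry a different <code>--hover-bg</code> value in its inline style, while the hover rule itself lives in one place in the stylesheet.</p>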
<p>This works for hover effects, but comes in handy in other situations too. For instance, I use it to achieve the <code>position: sticky</code> effect on <a href="https://maxkohler.com/">this site</a>. That's in fact where I learned the trick.</p>
<p>I go into a little more detail <a href="https://css-tricks.com/want-to-write-a-hover-effect-with-inline-css-use-css-variables/">in the article</a>.</p>
What's a Content Management System?2021-11-20T00:00:00Zhttps://maxkohler.com/posts/2021-11-20-whats-a-cms/<h2 id="who-this-article-is-for">Who this article is for</h2>
<p>This was originally written as a seminar for undergraduate design students, but it'll work for anyone who is comfortable with HTML. If you've built a few websites in HTML and CSS and are ready to take on bigger projects, read on.</p>
<h2 id="the-problem">The Problem</h2>
<p>Your first few websites are probably built in plain HTML (and CSS and Javascript, but we're not really talking about those here). There's nothing wrong with those technologies – they'll get you pretty far! But as your projects grow, you tend to run into two problems:</p>
<ul>
<li><strong>Sites with lots of content become unwieldy.</strong> Let's say you're building a site with a hundred articles, or a thousand archival records, or ten thousand of something else: it's not that you <em>couldn't</em> write out HTML for all of that content, but it would be pretty tedious. And if you wanted to change anything about the markup after the fact, that would be a time-consuming task.</li>
<li><strong>Other people need to edit content on your website.</strong> You could teach them all to write HTML, but that's not always an option: Maybe they work in a different department, or they'll want to work on the site long after you've moved on to the next project and are no longer able to help out. Also, HTML might not be the best place to work on content: a writer might prefer to work in Google Docs or some other writing app, but there's no easy way to wrangle that back into an HTML file.</li>
</ul>
<p>You'll encounter other issues when scaling up your web projects, but many of them can be traced back to one of these two.</p>
<h2 id="the-solution">The solution</h2>
<p>The solution is to introduce a level of abstraction. Specifically, we're going to abstract our content away from our markup (ie. our HTML), so we can work on each separately. We do that in three steps:</p>
<ol>
<li>Take all the content (like text and images) out of our HTML file and put them into a separate datastore.</li>
<li>Write templates that look more or less like HTML but have special placeholders where our content used to be.</li>
<li>Set up a piece of software that takes our content and our templates and combines them back into regular HTML - because that's the only thing browsers understand.</li>
</ol>
<p>The combination of one, two, or all three of these things is called a CMS (Content Management System). Some CMSes have even more features, like an interface to let you edit content in the datastore, or template customisation, or analytics, or webhosting – but the big, central idea is abstracting content from markup.</p>
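<p>Step three is simpler than it sounds. Here's a toy version in Javascript - a real templating language does far more, but the core move is the same:</p>

```javascript
// A toy version of step 3: merge content from a datastore into a template.
const data = { site_title: "Max's recipe box" };
const template = "<h1>{{ site_title }}</h1>";

// Swap each {{ key }} placeholder for the matching value in the datastore
const html = template.replace(/\{\{\s*(\w+)\s*\}\}/g, (match, key) => data[key]);

console.log(html); // → <h1>Max's recipe box</h1>
```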
<p>The best way to understand this idea is to look at an example. I'll leave out most of the technical details for now - we'll deal with them in the second part: <em>Content Management Systems in the Real World</em>.</p>
<h2 id="an-example">An example</h2>
<p>Let's say we have a website called <em>Max's recipe box</em> that lists a bunch of recipes and how long they take to cook. The site works great, but we've run into the two problems we mentioned in the beginning: We're adding lots of recipes, so the HTML file is becoming unwieldy. Also, our friend Alice wants to contribute to the site, but she doesn't want to edit HTML files. We've decided to address these problems by getting the site onto a CMS. How do we go about that?</p>
<p>At the moment, our HTML file looks like this:</p>
<pre class="language-html"><code class="language-html"><span class="token tag"><span class="token tag"><span class="token punctuation"><</span>h1</span><span class="token punctuation">></span></span>Max's recipe box<span class="token tag"><span class="token tag"><span class="token punctuation"></</span>h1</span><span class="token punctuation">></span></span>
<span class="token tag"><span class="token tag"><span class="token punctuation"><</span>ul</span><span class="token punctuation">></span></span>
<span class="token tag"><span class="token tag"><span class="token punctuation"><</span>li</span><span class="token punctuation">></span></span>
<span class="token tag"><span class="token tag"><span class="token punctuation"><</span>h2</span><span class="token punctuation">></span></span>Mushroom pizza<span class="token tag"><span class="token tag"><span class="token punctuation"></</span>h2</span><span class="token punctuation">></span></span>
<span class="token tag"><span class="token tag"><span class="token punctuation"><</span>span</span><span class="token punctuation">></span></span>Duration: 0:45<span class="token tag"><span class="token tag"><span class="token punctuation"></</span>span</span><span class="token punctuation">></span></span>
<span class="token tag"><span class="token tag"><span class="token punctuation"></</span>li</span><span class="token punctuation">></span></span>
<span class="token tag"><span class="token tag"><span class="token punctuation"><</span>li</span><span class="token punctuation">></span></span>
<span class="token tag"><span class="token tag"><span class="token punctuation"><</span>h2</span><span class="token punctuation">></span></span>Pumpkin soup<span class="token tag"><span class="token tag"><span class="token punctuation"></</span>h2</span><span class="token punctuation">></span></span>
<span class="token tag"><span class="token tag"><span class="token punctuation"><</span>span</span><span class="token punctuation">></span></span>Duration: 0:45<span class="token tag"><span class="token tag"><span class="token punctuation"></</span>span</span><span class="token punctuation">></span></span>
<span class="token tag"><span class="token tag"><span class="token punctuation"></</span>li</span><span class="token punctuation">></span></span>
<span class="token tag"><span class="token tag"><span class="token punctuation"><</span>li</span><span class="token punctuation">></span></span>
<span class="token tag"><span class="token tag"><span class="token punctuation"><</span>h2</span><span class="token punctuation">></span></span>Apple pie<span class="token tag"><span class="token tag"><span class="token punctuation"></</span>h2</span><span class="token punctuation">></span></span>
<span class="token tag"><span class="token tag"><span class="token punctuation"><</span>span</span><span class="token punctuation">></span></span>Duration: 0:45<span class="token tag"><span class="token tag"><span class="token punctuation"></</span>span</span><span class="token punctuation">></span></span>
<span class="token tag"><span class="token tag"><span class="token punctuation"></</span>li</span><span class="token punctuation">></span></span>
<span class="token tag"><span class="token tag"><span class="token punctuation"></</span>ul</span><span class="token punctuation">></span></span></code></pre>
<p>Let's start by extracting the title of our site (<code>Max's recipe box</code>) into a datastore - in this case we'll use a text file:</p>
<pre class="language-csv"><code class="language-csv"><span class="token value">site_title</span>
<span class="token value">Max's recipe box</span></code></pre>
<p>We have to label the piece of data we extracted so we can reference it later. I came up with <code>site_title</code>, but anything that makes sense in your mind will work.</p>
<p>Then, we put a placeholder where that piece of content used to be in our HTML. We'll use a templating language called Liquid for these examples, which uses <code>{{ double curly braces }}</code> to mark placeholders - other languages have different conventions. Our file now looks like this:</p>
<pre class="language-diff-html"><code class="language-diff-html"><span class="token deleted-sign deleted language-html"><span class="token prefix deleted">-</span><span class="token tag"><span class="token tag"><span class="token punctuation"><</span>h1</span><span class="token punctuation">></span></span>Max's Recipe Box<span class="token tag"><span class="token tag"><span class="token punctuation"></</span>h1</span><span class="token punctuation">></span></span>
</span><span class="token inserted-sign inserted language-html"><span class="token prefix inserted">+</span><span class="token tag"><span class="token tag"><span class="token punctuation"><</span>h1</span><span class="token punctuation">></span></span>{{site_title}}<span class="token tag"><span class="token tag"><span class="token punctuation"></</span>h1</span><span class="token punctuation">></span></span>
</span><span class="token unchanged language-html"><span class="token prefix unchanged"> </span><span class="token tag"><span class="token tag"><span class="token punctuation"><</span>ul</span><span class="token punctuation">></span></span>
<span class="token prefix unchanged"> </span> <span class="token tag"><span class="token tag"><span class="token punctuation"><</span>li</span><span class="token punctuation">></span></span>
<span class="token prefix unchanged"> </span> <span class="token tag"><span class="token tag"><span class="token punctuation"><</span>h2</span><span class="token punctuation">></span></span>Mushroom pizza<span class="token tag"><span class="token tag"><span class="token punctuation"></</span>h2</span><span class="token punctuation">></span></span>
<span class="token prefix unchanged"> </span> <span class="token tag"><span class="token tag"><span class="token punctuation"><</span>span</span><span class="token punctuation">></span></span>Duration: 0:45<span class="token tag"><span class="token tag"><span class="token punctuation"></</span>span</span><span class="token punctuation">></span></span>
<span class="token prefix unchanged"> </span> <span class="token tag"><span class="token tag"><span class="token punctuation"></</span>li</span><span class="token punctuation">></span></span>
<span class="token prefix unchanged"> </span> <span class="token tag"><span class="token tag"><span class="token punctuation"><</span>li</span><span class="token punctuation">></span></span>
<span class="token prefix unchanged"> </span> <span class="token tag"><span class="token tag"><span class="token punctuation"><</span>h2</span><span class="token punctuation">></span></span>Pumpkin soup<span class="token tag"><span class="token tag"><span class="token punctuation"></</span>h2</span><span class="token punctuation">></span></span>
<span class="token prefix unchanged"> </span> <span class="token tag"><span class="token tag"><span class="token punctuation"><</span>span</span><span class="token punctuation">></span></span>Duration: 0:45<span class="token tag"><span class="token tag"><span class="token punctuation"></</span>span</span><span class="token punctuation">></span></span>
<span class="token prefix unchanged"> </span> <span class="token tag"><span class="token tag"><span class="token punctuation"></</span>li</span><span class="token punctuation">></span></span>
<span class="token prefix unchanged"> </span> <span class="token tag"><span class="token tag"><span class="token punctuation"><</span>li</span><span class="token punctuation">></span></span>
<span class="token prefix unchanged"> </span> <span class="token tag"><span class="token tag"><span class="token punctuation"><</span>h2</span><span class="token punctuation">></span></span>Apple pie<span class="token tag"><span class="token tag"><span class="token punctuation"></</span>h2</span><span class="token punctuation">></span></span>
<span class="token prefix unchanged"> </span> <span class="token tag"><span class="token tag"><span class="token punctuation"><</span>span</span><span class="token punctuation">></span></span>Duration: 0:45<span class="token tag"><span class="token tag"><span class="token punctuation"></</span>span</span><span class="token punctuation">></span></span>
<span class="token prefix unchanged"> </span> <span class="token tag"><span class="token tag"><span class="token punctuation"></</span>li</span><span class="token punctuation">></span></span>
<span class="token prefix unchanged"> </span><span class="token tag"><span class="token tag"><span class="token punctuation"></</span>ul</span><span class="token punctuation">></span></span>
</span></code></pre>
<p>Note that we're using the label from our datastore (<code>site_title</code>) to refer to the piece of content we just extracted. The addition of that placeholder turns our HTML file into a <em>template</em>.</p>
<p>Our new setup is already useful: If Alice wanted to change the title of the site, she wouldn't have to touch any HTML - all she would have to edit is that little text file.</p>
<p>Now, let's do the same with the list of recipes. We start by pulling the titles and durations into another text file:</p>
<pre class="language-csv"><code class="language-csv"><span class="token value">title</span><span class="token punctuation">,</span><span class="token value"> duration</span>
<span class="token value">Mushroom pizza</span><span class="token punctuation">,</span><span class="token value"> 0:45</span>
<span class="token value">Pumpkin Soup</span><span class="token punctuation">,</span><span class="token value"> 1:20</span>
<span class="token value">Apple Pie</span><span class="token punctuation">,</span><span class="token value"> 2:00</span></code></pre>
<p>Again, we're using the first line of our file to label our data: <code>title</code> and <code>duration</code>. Every line after that represents an individual recipe, each with the actual title and duration. This way of organising a text file is called CSV (Comma-Separated Values), and when you squint at it you'll see that it works like a spreadsheet: the first line of the file lists the column titles, then the data follows row after row. You can actually export CSVs from most spreadsheet software, which can be pretty handy.</p>
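If you want to see those labels doing their job programmatically, Python's standard library reads a CSV straight into labelled rows. A small sketch (with our recipe data inlined so it's self-contained):

```python
import csv
import io

data = """title,duration
Mushroom pizza,0:45
Pumpkin Soup,1:20
Apple Pie,2:00
"""

# DictReader treats the first line as column labels, just like a spreadsheet
# header, and turns every following line into a dictionary keyed by those labels.
rows = list(csv.DictReader(io.StringIO(data)))
print(rows[0]["title"])     # Mushroom pizza
print(rows[2]["duration"])  # 2:00
```

Because the labels travel with the data, any program reading the file can ask for `title` or `duration` by name instead of counting columns.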
<p>With our data extracted and organised, we can replace the recipe list in our template with more placeholders:</p>
<pre class="language-diff-liquid"><code class="language-diff-liquid"><span class="token unchanged language-liquid"><span class="token prefix unchanged"> </span><span class="token tag"><span class="token tag"><span class="token punctuation"><</span>h1</span><span class="token punctuation">></span></span><span class="token liquid language-liquid"><span class="token delimiter punctuation">{{</span>site_title<span class="token delimiter punctuation">}}</span></span><span class="token tag"><span class="token tag"><span class="token punctuation"></</span>h1</span><span class="token punctuation">></span></span>
</span><span class="token inserted-sign inserted language-liquid"><span class="token prefix inserted">+</span><span class="token tag"><span class="token tag"><span class="token punctuation"><</span>ul</span><span class="token punctuation">></span></span>
<span class="token prefix inserted">+</span> <span class="token liquid language-liquid"><span class="token delimiter punctuation">{%</span> <span class="token keyword">for</span> recipe <span class="token keyword">in</span> recipes <span class="token delimiter punctuation">%}</span></span>
<span class="token prefix inserted">+</span> <span class="token tag"><span class="token tag"><span class="token punctuation"><</span>li</span><span class="token punctuation">></span></span>
<span class="token prefix inserted">+</span> <span class="token tag"><span class="token tag"><span class="token punctuation"><</span>h2</span><span class="token punctuation">></span></span><span class="token liquid language-liquid"><span class="token delimiter punctuation">{{</span>recipe<span class="token punctuation">.</span>title<span class="token delimiter punctuation">}}</span></span><span class="token tag"><span class="token tag"><span class="token punctuation"></</span>h2</span><span class="token punctuation">></span></span>
<span class="token prefix inserted">+</span> <span class="token tag"><span class="token tag"><span class="token punctuation"><</span>span</span><span class="token punctuation">></span></span>Duration: <span class="token liquid language-liquid"><span class="token delimiter punctuation">{{</span>recipe<span class="token punctuation">.</span>duration<span class="token delimiter punctuation">}}</span></span><span class="token tag"><span class="token tag"><span class="token punctuation"></</span>span</span><span class="token punctuation">></span></span>
<span class="token prefix inserted">+</span> <span class="token tag"><span class="token tag"><span class="token punctuation"></</span>li</span><span class="token punctuation">></span></span>
<span class="token prefix inserted">+</span> <span class="token liquid language-liquid"><span class="token delimiter punctuation">{%</span> <span class="token keyword">endfor</span> <span class="token delimiter punctuation">%}</span></span>
<span class="token prefix inserted">+</span><span class="token tag"><span class="token tag"><span class="token punctuation"></</span>ul</span><span class="token punctuation">></span></span>
</span></code></pre>
<p>The line <code>{% for recipe in recipes %}</code> is telling the computer: <em>Hey! For every recipe in our datastore, repeat whatever markup follows until you see <code>{% endfor %}</code>.</em> Between those tags we use placeholders like <code>{{recipe.title}}</code> to display specific pieces of information for the current recipe. Liquid has many more constructs like this for dealing with data in smart ways – for example, we could output different HTML if a recipe has a particularly long title, or no title at all – but the principle is the same.</p>
<p>Moving our recipes into a datastore has the same benefit as extracting the title: If Alice wants to add a recipe to the list, she can just edit the CSV file. Even better, she could import that file into Google Sheets, invite other people, set up a whole editorial process for adding recipes – as long as she exports a CSV file at the end, it wouldn't impact our workflow at all.</p>
<p>But we've also solved our second problem: The template doesn't care if our site has 5 or 5,000 recipes - it'll iterate through them and output the HTML just the same. If we need to change anything about the markup, we just edit the template and the computer does all the boring typing for us.</p>
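You don't need a full template engine to see that principle at work. Here's a sketch in Python that plays the part of Liquid: it reads the two datastores (inlined below so the sketch is self-contained) and stamps out the same <code>&lt;li&gt;</code> markup for every row, however many rows there are:

```python
import csv
import io

# Inlined stand-ins for the two text files from above.
site_csv = "site_title\nMax's recipe box\n"
recipes_csv = """title,duration
Mushroom pizza,0:45
Pumpkin Soup,1:20
Apple Pie,2:00
"""

site_title = next(csv.DictReader(io.StringIO(site_csv)))["site_title"]

# Play the part of the {% for %} loop: one <li> per row of the datastore.
lines = [f"<h1>{site_title}</h1>", "<ul>"]
for recipe in csv.DictReader(io.StringIO(recipes_csv)):
    lines.append(f"  <li><h2>{recipe['title']}</h2>"
                 f"<span>Duration: {recipe['duration']}</span></li>")
lines.append("</ul>")

html_out = "\n".join(lines)
print(html_out)
```

Add a fourth row to `recipes_csv` and a fourth `<li>` appears in the output – nobody has to touch the markup.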
<h2 id="demo">Demo</h2>
<p>The easiest way to get a feel for these concepts is to work with them directly. <a href="https://codepen.io/maxakohler/full/GRMKRKB">Here's a coding environment</a> with the three files we discussed: Two CSV files (the datastore) and a Liquid template. All three are editable. Press the button below to smush them into an HTML file, then change the data, the template, or both, and observe how the rendered HTML changes.</p>
<p>Don't worry about how exactly our data and templates are rendered into HTML in this demo (though feel free to look at the code if you're curious) – the goal for now is to get you comfortable with the principle of separating content from markup. Once you've achieved that, you're ready to tackle the tricky business of setting up your own content management system and using it for real-world projects.</p>
How to fix inconsistent vertical metrics in web fonts2022-02-19T00:00:00Zhttps://maxkohler.com/posts/2022-02-19-fixing-vertical-metrics/<p>Here's a really specific problem I've run into a handful of times building websites with custom fonts: I set some type on my Windows machine, and everything works as expected. But when I pull up the same page on a Mac (regardless of the browser) the line height is totally different.</p>
<p>My first thought was that something was wrong with my CSS – maybe there's a rogue <code>line-height</code> declaration that gets applied in one place and not the other? But it turned out the problem was actually the font itself: It had different vertical metrics for each platform.</p>
<h2 id="how-to-fix-the-problem">How to fix the problem</h2>
<p>The best option is to get whoever produced the font to re-export it with the correct metrics. This is especially true for commercial typefaces, which you're usually not allowed to modify. Failing that, you can generate new font files yourself in one of two ways:</p>
<h3 id="1.-fontsquirrel">1. FontSquirrel</h3>
<p>Upload the file to the <a href="https://www.fontsquirrel.com/tools/webfont-generator">FontSquirrel Webfont Generator</a>, switch to Expert mode, check "Auto-Adjust Vertical Metrics", and download the generated fonts. If you're lucky, this will repair the inconsistent metrics and your type will render correctly.</p>
<h3 id="2.-fonttools">2. Fonttools</h3>
<p>If this doesn't work, you can adjust the metrics manually using the command line and a text editor.</p>
<p>Install <a href="https://github.com/fonttools/fonttools#what-is-this">fonttools</a> and <a href="https://github.com/google/brotli">brotli</a> with <code>pip install fonttools brotli</code>. Then <code>cd</code> your way to your project folder and run <code>ttx borked-font.ttf</code>. This will convert the binary <code>ttf</code> into a human-readable XML file called <code>borked-font.ttx</code>.</p>
<p>Open the <code>ttx</code> file in your text editor and <em>look for problems</em>. Specifically, you want to ensure that:</p>
<ul>
<li><code>fsSelection</code> bit 7 (the <em>eighth</em> number in the sequence) is set to <code>1</code></li>
<li><code>sTypoAscender</code> is equal to <code>hheaAscender</code> (meaning the <code>&lt;ascent&gt;</code> key in the <code>&lt;hhea&gt;</code> table)</li>
<li><code>sTypoDescender</code> is equal to <code>hheaDescender</code></li>
<li><code>sTypoLineGap</code> is equal to <code>hheaLineGap</code></li>
<li><code>winAscent</code> is equal to the largest <code>ymax</code> value in the font</li>
<li><code>winDescent</code> is equal to the lowest <code>ymin</code> in the font <em>times -1</em></li>
</ul>
<p>When you're done, run <code>ttx --flavor woff borked-font.ttx</code> to convert it back into a <code>woff</code> file. Set <code>--flavor woff2</code> to compile straight to <code>woff2</code>, or drop the flag altogether to produce an uncompressed <code>ttf</code>. Load up the new file on your website, and see if you solved the problem.</p>
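If you'd rather script the checklist than hand-edit XML, the same fixes can be applied with fontTools' Python API. The table and attribute names below are the real fontTools ones, but the filenames are placeholders, and the demo at the end uses dummy objects standing in for a font – treat this as a sketch, not a drop-in tool:

```python
from types import SimpleNamespace  # stands in for real font tables in the demo

def sync_metrics(os2, hhea, y_max, y_min):
    """Apply the checklist above to a font's OS/2 and hhea tables."""
    os2.fsSelection |= 1 << 7          # bit 7: USE_TYPO_METRICS
    os2.sTypoAscender = hhea.ascent
    os2.sTypoDescender = hhea.descent
    os2.sTypoLineGap = hhea.lineGap
    os2.usWinAscent = y_max            # tallest ymax in the font
    os2.usWinDescent = -y_min          # lowest ymin, times -1

# With fontTools installed, you'd run it against a real font roughly like this:
# from fontTools.ttLib import TTFont
# font = TTFont("borked-font.ttf")        # placeholder filename
# head = font["head"]
# sync_metrics(font["OS/2"], font["hhea"], head.yMax, head.yMin)
# font.save("fixed-font.ttf")

# Quick demo on dummy tables:
os2 = SimpleNamespace(fsSelection=0b01000000)
hhea = SimpleNamespace(ascent=800, descent=-200, lineGap=90)
sync_metrics(os2, hhea, y_max=950, y_min=-250)
print(os2.sTypoAscender, os2.usWinDescent)  # 800 250
```

The commented-out part uses <code>head.yMax</code>/<code>head.yMin</code>, the font-wide bounding box values, which is exactly the "largest ymax / lowest ymin" the checklist asks for.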
<h2 id="background">Background</h2>
<p>OpenType fonts are complicated pieces of software. In addition to the actual letterforms (stored as Bézier curves), they contain <em>tables</em> holding the data needed to map these outlines to Unicode code points and enable things like contextual alternates, kerning pairs, variable fonts, and whatever else you might want to do.</p>
<p>One of the things that's stored in these tables is the font's <em>vertical metrics</em>. This is a set of numbers that define the height of the ascenders, the depth of the descenders, and the recommended linespacing. Rendering engines use these numbers to calculate where the first baseline of a text should fall, what the distance between subsequent lines should be, and how much padding to apply below the last line. They're roughly equivalent to the space above and below the raised letterform on a metal sort.</p>
<figure class="post-figure thumbnail">
<img alt="Drawing of relief letter used in letterpress printing" loading="lazy" src="https://maxkohler.com/assets/type.png" />
<figcaption>
</figcaption>
</figure>
<p>For <a href="https://docs.microsoft.com/en-us/typography/opentype/">historical reasons</a>, vertical metrics are stored in <em>three</em> different places (called <code>hhea</code>, <code>OS/2 typo</code> and <code>OS/2 win</code>), and different rendering engines get their information from different ones. Apple devices generally use <code>hhea</code>, Windows uses either <code>OS/2 typo</code> or <code>OS/2 win</code>, and old versions of MS Office use <code>OS/2 win</code> exclusively. If the numbers in these tables aren't the same, you can end up in a situation where type renders differently in different browsers, design tools, or operating systems.</p>
<p>You can get out of that situation as a user by syncing up the numbers yourself, like we did above. First, we set bit 7 in <code>fsSelection</code> to <code>1</code> to <a href="https://docs.microsoft.com/en-us/typography/opentype/spec/os2#fsselection">activate a setting</a> called <code>USE_TYPO_METRICS</code>. This tells browsers on Windows to use the values in <code>OS/2 typo</code> rather than <code>OS/2 win</code>. Then we synced up the values in <code>hhea</code> and <code>OS/2 typo</code> and set <code>OS/2 win</code> to match the tallest ascender and deepest descender in the font to avoid clipping. Finally we recompiled the font with the new metrics, hopefully solving our issue. There are other approaches to setting vertical metrics, but this is the one <a href="https://glyphsapp.com/learn/vertical-metrics#g-the-webfontstrategy-2019">recommended by Glyphs</a> and the <a href="https://github.com/googlefonts/gf-docs/tree/main/VerticalMetrics#vertical-metrics">Google Fonts Team</a>.</p>
<p>If you're a type designer, you can avoid the problem altogether by setting the metrics correctly as you design the typeface, and using <a href="https://github.com/googlefonts/fontbakery">automated testing</a> to catch inconsistencies in your build process.</p>
<h2 id="notes">Notes</h2>
<ul>
<li>Thanks to <a href="https://stackoverflow.com/questions/10044130/custom-fonts-with-different-vertical-metrics-between-oss">FontSquirrel and Neil on Stackoverflow</a>, who sent me down this rabbit hole.</li>
<li>In case I ever need it, here is the <a href="https://docs.microsoft.com/en-us/typography/opentype/spec/hhea">OpenType Spec</a>.</li>
</ul>
Everything I know about alt text2022-02-25T00:00:00Zhttps://maxkohler.com/posts/everything-i-know-about-alt-text/<h2 id="what-is-alt-text%3F">What is alt text?</h2>
<p>"Alt text" is short for "alternative text". It's a short piece of text that's used when the image itself isn't available because someone is using a text-only or audio version of your website, they turned off images to save bandwidth, or the network request failed. Alt text also makes your images more readable for machines, both your own and those <a href="https://developers.google.com/search/docs/advanced/guidelines/google-images?hl=en#use-descriptive-alt-text">built by others</a>.</p>
<p>It's different to an image caption, which provides <em>additional</em> context to an image and is visible to everyone, and to an extended description.</p>
<h2 id="why-should-you-use-alt-text%3F">Why should you use alt text?</h2>
<p>Alt text is a straightforward way to give more people access to your content. This includes people who are blind or have low vision and rely on screen readers and other assistive technology, but also people who are cooking, driving, on a slow internet connection, or in some other situation where an audio or text-only version of your website is just more convenient.</p>
<p>If your organisation takes public money, you're probably required to provide alt text by your country's accessibility laws. In the U.S. the relevant standard is <a href="https://www.access-board.gov/ict/">Section 508</a> of the Rehabilitation Act and the <a href="https://beta.ada.gov/">Americans with Disabilities Act (ADA)</a><sup class="footnote-ref"><a href="https://maxkohler.com/posts/everything-i-know-about-alt-text/#fn1" id="fnref1">1</a></sup>, in Britain it's the <a href="https://www.gov.uk/guidance/accessibility-requirements-for-public-sector-websites-and-apps#meeting-accessibility-requirements">Public Sector Bodies (Websites and Mobile Applications) (No. 2) Accessibility Regulations 2018</a>, and European member states all have local laws implementing a directive called <a href="https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32016L2102">EU 2016/2102</a>. Thankfully, all of these laws refer to a common standard called the <a href="https://www.w3.org/TR/UNDERSTANDING-WCAG20/">Web Content Accessibility Guidelines (WCAG)</a>, which requires that <a href="https://www.w3.org/TR/UNDERSTANDING-WCAG20/text-equiv.html">"all non-text content is also available in text"</a> (Guideline 1.1).</p>
<h2 id="how-do-you-write-good-alt-text%3F">How do you write good alt text?</h2>
<h3 id="content">Content</h3>
<p>Alt text should give people the same information as the image it replaces. This means you need to ask what the <em>purpose</em> of the image is in the context it's in and write different alt text depending on the answer.</p>
<p>Let's say your website has a phone number with an icon of a phone next to it. The purpose of that icon is just to indicate that what follows is, in fact, a phone number – so the alt text should do the same. The word "Phone" would be enough to do that. Any other details would be distracting.</p>
<p>If the point of an image is to show what something looks like, the alt text should be a visual description of that thing, focusing on the important details. If the image is a photo of a boat you're trying to sell, you should focus on the boat rather than the landscape in the background. However, if you used the same image as an example of your landscape photography, describing the background, the light, and the overall composition would make sense.</p>
<p>People have come up with more <a href="https://www.w3.org/WAI/tutorials/images/">formal categories of images</a> and rules for writing alt texts for each. These can be useful, but ultimately these are editorial decisions you need to make using your own judgement.</p>
<h3 id="style">Style</h3>
<p>I think it's helpful to remind yourself that writing alt text is still <em>writing</em>. It's not fundamentally different from any other writing you do on your website. This means you can use everything you know about your audience, structure, tone, editing and so on, and be as nuanced and expressive as you are in other contexts. If you think of it as a literary endeavour rather than a technical chore, writing alt text can be fun and you'll likely produce better results.</p>
<p>This being said, some basic style tips are generally accepted:</p>
<ul>
<li>Write in the simple present.</li>
<li>Aim for a length of 15–20 words or less.</li>
<li>Don't repeat information that's already present in the image caption or elsewhere on the page.</li>
<li>If the image contains important text, transcribe it in full.</li>
<li>Don't use <span class="small-caps">all-caps</span> for emphasis – some screen readers will read each letter separately, which would be frustrating.</li>
<li>Don't say it's an image – <a href="https://axesslab.com/alt-texts/#dont-say-its-an-image">screen readers will add that information themselves</a>.</li>
</ul>
<h2 id="how-do-you-add-alt-text%3F">How do you add alt text?</h2>
<p>It depends on your situation. If you're working with HTML, you write the alt text right into your markup using the <code>alt</code> attribute:</p>
<pre class="language-html"><code class="language-html"><span class="token tag"><span class="token tag"><span class="token punctuation"><</span>img</span> <span class="token attr-name">alt</span><span class="token attr-value"><span class="token punctuation attr-equals">=</span><span class="token punctuation">"</span>Charcoal drawing of apples on checked blanket<span class="token punctuation">"</span></span> <span class="token attr-name">src</span><span class="token attr-value"><span class="token punctuation attr-equals">=</span><span class="token punctuation">"</span>apples.jpg<span class="token punctuation">"</span></span> <span class="token punctuation">/></span></span></code></pre>
<p>Inline SVGs don't support the <code>alt</code>-attribute, <a href="https://axesslab.com/alt-texts/#svg">but you can use</a> <code>role="img"</code> and <code>aria-label</code> instead:</p>
<pre class="language-html"><code class="language-html"><span class="token tag"><span class="token tag"><span class="token punctuation"><</span>svg</span> <span class="token attr-name">role</span><span class="token attr-value"><span class="token punctuation attr-equals">=</span><span class="token punctuation">"</span>img<span class="token punctuation">"</span></span> <span class="token attr-name">aria-label</span><span class="token attr-value"><span class="token punctuation attr-equals">=</span><span class="token punctuation">"</span>Diagram of an internal combustion engine<span class="token punctuation">"</span></span> <span class="token attr-name">viewBox</span><span class="token attr-value"><span class="token punctuation attr-equals">=</span><span class="token punctuation">"</span>…<span class="token punctuation">"</span></span><span class="token punctuation">></span></span>…<span class="token tag"><span class="token tag"><span class="token punctuation"></</span>svg</span><span class="token punctuation">></span></span></code></pre>
<p>If you're not editing your site's HTML directly, you need to figure out how to add alt text through your content management system. Most popular ones have built-in tools to do it:</p>
<ul>
<li><a href="https://help.twitter.com/en/using-twitter/picture-descriptions">Twitter</a></li>
<li><a href="https://help.instagram.com/503708446705527">Instagram</a></li>
<li><a href="https://www.facebook.com/help/214124458607871">Facebook</a></li>
<li><a href="https://make.wordpress.org/accessibility/handbook/content/alternative-text-for-images/#visual-example">WordPress</a></li>
<li><a href="https://help.medium.com/hc/en-us/articles/215679797-Images">Medium</a></li>
<li><a href="https://support.substack.com/hc/en-us/articles/4414829453204-How-can-I-edit-images-on-a-Substack-post-">Substack</a></li>
<li><a href="https://brownandtrans.tumblr.com/post/613978932163772416/how-to-write-alt-text-and-image-descriptions-for">Tumblr</a> (only in iOS and Android apps)</li>
<li><a href="https://support.squarespace.com/hc/en-us/articles/206542357-Adding-alt-text-to-images">Squarespace</a></li>
</ul>
<p>If your CMS doesn't support alt text, you can work around the problem by adding captions or describing the image in the main text.</p>
<h2 id="can-you-automate-this%3F">Can you automate this?</h2>
<p>Some platforms generate alt text automatically when no hand-written text is available, notably <a href="https://www.facebook.com/help/216219865403298">Facebook and its properties</a> and <a href="https://www.theverge.com/2022/3/18/22984474/microsoft-edge-automatic-image-labels-accessibility-feature">Microsoft Edge</a>.</p>
<p>The problem with these systems is that they have no way of knowing what you were trying to communicate with a particular image. They just produce a general, more <a href="https://cripritual.com/haagaard/">or less accurate</a> description of it, which isn't always what your readers need (<a href="https://maxkohler.com/posts/everything-i-know-about-alt-text/#how-do-you-write-good-alt-text%3F">see above</a>). Still, in most situations it's probably better than no description at all.</p>
<h2 id="further-reading">Further reading</h2>
<ul>
<li><a href="https://www.nytimes.com/interactive/2022/02/18/arts/alt-text-images-descriptions.html">The Hidden Image Descriptions Making the Internet Accessible</a>. A really well-produced introduction to alt text from both technical and cultural perspectives by Meg Miller and Ilaria Parogni in the New York Times.</li>
<li><a href="https://www.cooperhewitt.org/cooper-hewitt-guidelines-for-image-description/">Cooper Hewitt Guidelines for Image Description</a>. This is the most detailed guide I've found on writing good image descriptions, captions, and alt text. It has particularly thoughtful guidelines on describing people.</li>
<li>The idea to frame alt text as a literary endeavour comes from a project called <a href="https://alt-text-as-poetry.net/">Alt text is poetry</a> by the artists Bojana Coklyat and Shannon Finnegan. Their website is like a breath of fresh air when you've been knee-deep in the WCAG spec all day.</li>
</ul>
<section class="footnotes">
<ol class="footnotes-list">
<li id="fn1" class="footnote-item"><p>The U.S. Department of Justice has some really well-written guidance on how the <a href="https://beta.ada.gov/web-guidance/">ADA relates to web accessibility</a>. <a href="https://maxkohler.com/posts/everything-i-know-about-alt-text/#fnref1" class="footnote-backref">↩︎</a></p>
</li>
</ol>
</section>
Add-to-calendar links2022-03-06T00:00:00Zhttps://maxkohler.com/posts/calendar-links/<p>When you're building a website for a timed event like a talk or a workshop, you want to make it <em>really</em> easy for people to add your event to their own calendar. I suspect once you get someone to do that, there's a pretty high chance they'll actually come to your event - which is why you're building the site in the first place.</p>
<p>One way to do this is an add-to-calendar button. When people click it, it opens the "Add an Event" screen of their calendar app with all the event information already filled in, so all they need to do is hit "save". It doesn't replace showing the event information visually on your website, but it's a nice enhancement.</p>
<p>Here's the interaction I'm talking about:</p>
<figure class="post-figure ">
<img alt="Add Event screen in Google Calendar" loading="lazy" src="https://maxkohler.com/assets/google-calendar.gif" />
<figcaption>
<span class="figure__caption">
</span>
</figcaption>
</figure>
<p>Different calendar apps have different ways of doing this (some use special links with URL parameters, others need an <code>ICS</code> file) and support different sets of event data, so you'll have to compromise and probably show a few buttons at once. Still, the cost of doing this is low because it all happens in HTML – no JavaScript or other dependencies required.</p>
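For the apps that want an <code>ICS</code> file rather than a link, a minimal one looks like this. Every name, UID, and date here is a made-up placeholder – only the structure matters:

```text
BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//example.com//add-to-calendar demo//EN
BEGIN:VEVENT
UID:letterpress-workshop-2022@example.com
DTSTAMP:20220301T120000Z
DTSTART:20220401T140000Z
DTEND:20220401T160000Z
SUMMARY:Letterpress workshop
LOCATION:London
END:VEVENT
END:VCALENDAR
```

Serve it with the <code>text/calendar</code> MIME type and link to it like any other file; most calendar apps will open it straight into their "Add an Event" screen.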
<h2 id="google-calendar">Google calendar</h2>
<p>Google has no official documentation on this (although they <a href="https://web.archive.org/web/20120225150257/http://www.google.com/googlecalendar/event_publisher_guide_detail.html">used to</a>), but add-to-calendar links work and support a surprising number of features.</p>
<p>The base URL is <code>calendar.google.com/calendar/render?action=TEMPLATE</code> followed by a bunch of query parameters containing your event data.</p>
<h3 id="parameters">Parameters</h3>
<ul>
<li><code>text</code> (required) – Title</li>
<li><code>details</code> - Description. Basic HTML is supported.</li>
<li><code>location</code> – Location.</li>
<li><code>dates</code> (required) – Start and end dates/times in UTC format (<code>YYYYMMDDThhmmssZ</code>), separated by <code>/</code>. Omit the times for all-day events. All dates are in GMT by default. Omit the trailing <code>Z</code> to use the user's local timezone, or use the <code>ctz</code> parameter to specify a custom timezone.</li>
<li><code>ctz</code> – Custom timezone from the <a href="https://en.wikipedia.org/wiki/List_of_tz_database_time_zones">tz database</a>, for example: <code>America/New_York</code></li>
<li><code>recur</code> – Specify a recurring event with an <a href="https://icalendar.org/iCalendar-RFC-5545/3-8-5-3-recurrence-rule.html">RFC-5545 RRULE</a> string. Example: <code>recur=RRULE:FREQ=DAILY;INTERVAL=3</code>. There's also an online <a href="https://icalendar.org/rrule-tool.html">generator to make those strings</a>.</li>
<li><code>crm</code> – Show as available/busy. Possible values are <code>AVAILABLE</code>, <code>BUSY</code>, and <code>BLOCKING</code>.</li>
<li><code>add</code> – Semicolon-separated list of email addresses to add as event guests. If you set this parameter, it'll also add the user clicking the button as an event organiser.</li>
</ul>
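<p>If you're generating these links in a template or build script, a small helper keeps the URL encoding straight. This is just a sketch – the function name and the event data are invented for illustration, the parameter names are the ones listed above:</p>

```javascript
// Build a Google Calendar "add event" link from event data.
function googleCalendarLink({ title, start, end, details, location }) {
  const params = new URLSearchParams({
    action: "TEMPLATE",
    text: title,
    // Start/end in UTC format (YYYYMMDDThhmmssZ), separated by "/"
    dates: `${start}/${end}`,
    details,
    location,
  });
  // URLSearchParams takes care of percent-encoding for us
  return `https://calendar.google.com/calendar/render?${params}`;
}

const url = googleCalendarLink({
  title: "Tea Tasting",
  start: "20220714T170000Z",
  end: "20220714T190000Z",
  details: "Bring your own cup",
  location: "London",
});
```

<p>Drop <code>url</code> into a plain anchor's <code>href</code> and you have the button – the page itself stays plain HTML.</p>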
<h2 id="office-365-%2B-outlook-live">Office 365 + Outlook Live</h2>
<p>No documentation from Microsoft either, but a company called the Interaction Design Foundation put together <a href="https://github.com/InteractionDesignFoundation/add-event-to-calendar-docs/tree/main/services">this document</a> with a bunch of information.</p>
<p>Office 365 and Outlook live use the same query parameters, but different base URLs:</p>
<ul>
<li>Outlook Live: <code>outlook.live.com/calendar/0/deeplink/compose?path=/calendar/action/compose&rru=addevent</code></li>
<li>Office 365: <code>outlook.office.com/calendar/0/deeplink/compose?path=/calendar/action/compose&rru=addevent</code></li>
</ul>
<h3 id="parameters-1">Parameters</h3>
<ul>
<li><code>subject</code> (required) – Title</li>
<li><code>body</code> – Description of the event</li>
<li><code>location</code> – Location</li>
<li><code>startdt</code> (required) – Start date/time in ISO 8601 format (<code>YYYY-MM-DDTHH:mm:SSZ</code>). All dates are in UTC by default; omit the trailing <code>Z</code> to use the user's local timezone. For all-day events, omit the time and use the <code>YYYY-MM-DD</code> format.</li>
<li><code>enddt</code> (required) – End date/time in ISO 8601 format (<code>YYYY-MM-DDTHH:mm:SSZ</code>). Omit the time for all-day events.</li>
<li><code>allday</code> – Whether this is an all-day event. Boolean (<code>true</code>/<code>false</code>).</li>
<li><code>to</code> – Comma-separated list of emails of required attendees.</li>
<li><code>cc</code> – Comma-separated list of emails of optional attendees</li>
</ul>
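<p>Assembled into a link (the event values here are invented for illustration), an Outlook Live button might look like this:</p>

```html
<a href="https://outlook.live.com/calendar/0/deeplink/compose?path=/calendar/action/compose&amp;rru=addevent&amp;subject=Tea%20Tasting&amp;startdt=2022-07-14T17:00:00Z&amp;enddt=2022-07-14T19:00:00Z&amp;location=London">
  Add to Outlook
</a>
```

<p>Swap the base URL for the Office 365 one and the same query string works there too.</p>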
<h2 id="microsoft-teams">Microsoft Teams</h2>
<p>This one actually has <a href="https://docs.microsoft.com/en-us/microsoftteams/platform/concepts/build-and-test/deep-links#deep-linking-to-the-scheduling-dialog">official documentation</a>, but I can't for the life of me get it to work. I get the sense from the documentation that maybe it's only designed to work from <em>within</em> a text chat on Teams? But it might be a configuration issue on my end, too.</p>
<h2 id="ics">ICS</h2>
<p>Most other calendar apps (like the macOS calendar and the Windows calendar app) support events in a file format called <code>ICS</code>. The files look like this:</p>
<pre class="language-yaml"><code class="language-yaml"><span class="token key atrule">BEGIN</span><span class="token punctuation">:</span>VCALENDAR
<span class="token key atrule">VERSION</span><span class="token punctuation">:</span><span class="token number">2.0</span>
<span class="token key atrule">BEGIN</span><span class="token punctuation">:</span>VEVENT
<span class="token key atrule">DTSTAMP</span><span class="token punctuation">:</span>20220714T170000Z
<span class="token key atrule">DTSTART</span><span class="token punctuation">:</span>20220714T170000Z
<span class="token key atrule">DTEND</span><span class="token punctuation">:</span>20220714T190000Z
<span class="token key atrule">DESCRIPTION</span><span class="token punctuation">:</span>Description
<span class="token key atrule">SUMMARY</span><span class="token punctuation">:</span>Title
<span class="token key atrule">LOCATION</span><span class="token punctuation">:</span>Location
<span class="token key atrule">END</span><span class="token punctuation">:</span>VEVENT
<span class="token key atrule">END</span><span class="token punctuation">:</span>VCALENDAR</code></pre>
<p>The lines between <code>BEGIN:VEVENT</code> and <code>END:VEVENT</code> contain your event data. ICS has <em>a lot</em> of features, but the most useful ones for our scenario are:</p>
<ul>
<li><code>SUMMARY</code> – Title</li>
<li><code>DESCRIPTION</code> – Description</li>
<li><code>LOCATION</code> – Location</li>
<li><code>DTSTART</code> – (required) Start date in the <code>YYYYMMDDThhmmssZ</code> format. All dates are in UTC by default; to use a specific timezone instead, omit the trailing <code>Z</code> and add a <code>TZID</code> parameter: <code>DTSTART;TZID=America/New_York:20220119T143000</code>.</li>
<li><code>DTEND</code> – (required) End date in the <code>YYYYMMDDThhmmssZ</code> format.</li>
<li><code>DTSTAMP</code> – (required) Calendar apps can use this parameter to <a href="https://datatracker.ietf.org/doc/html/rfc5545#section-3.8.7.2">resolve conflicting events</a>. In our scenario, setting it to the same value as DTSTART seems to be enough.</li>
</ul>
<p>You could make an <code>ICS</code> file and point a link at it, but the files are small enough that you can write the whole thing into a data URL:</p>
<pre class="language-html"><code class="language-html"><span class="token tag"><span class="token tag"><span class="token punctuation"><</span>a</span>
<span class="token attr-name">href</span><span class="token attr-value"><span class="token punctuation attr-equals">=</span><span class="token punctuation">"</span>data:text/calendar;charset=utf-8,BEGIN:VCALENDAR%0D%0AVERSION:2.0%0D%0ABEGIN:VEVENT%0D%0ADTSTAMP:20220714T170000Z%0D%0ADTSTART:20220714T170000Z%0D%0ADTEND:20220714T190000Z%0D%0ADESCRIPTION:The event description%0D%0ASUMMARY:The event title%0D%0ALOCATION:Location%0D%0ASTATUS:CONFIRMED%0D%0ASEQUENCE:0%0D%0AEND:VEVENT%0D%0AEND:VCALENDAR<span class="token punctuation">"</span></span>
<span class="token punctuation">></span></span>Download ICS<span class="token tag"><span class="token tag"><span class="token punctuation"></</span>a</span>
<span class="token punctuation">></span></span></code></pre>
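<p>If you'd rather not hand-encode that string, a few lines of build-time code can generate the data URL from the same event data. A sketch – the helper name and event values are made up, the property names are the ICS properties listed above:</p>

```javascript
// Build a data: URL containing a minimal ICS event, suitable for an <a href>.
function icsDataUrl({ title, start, end, description, location }) {
  const lines = [
    "BEGIN:VCALENDAR",
    "VERSION:2.0",
    "BEGIN:VEVENT",
    `DTSTAMP:${start}`,
    `DTSTART:${start}`,
    `DTEND:${end}`,
    `SUMMARY:${title}`,
    `DESCRIPTION:${description}`,
    `LOCATION:${location}`,
    "END:VEVENT",
    "END:VCALENDAR",
  ];
  // ICS requires CRLF line endings; encodeURIComponent turns them into %0D%0A
  return (
    "data:text/calendar;charset=utf-8," + encodeURIComponent(lines.join("\r\n"))
  );
}

const href = icsDataUrl({
  title: "Tea Tasting",
  start: "20220714T170000Z",
  end: "20220714T190000Z",
  description: "Bring your own cup",
  location: "London",
});
```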
<h2 id="demo">Demo</h2>
<p><a href="https://codepen.io/maxakohler/full/podYgQB" class="button">View on Codepen</a></p>
<h2 id="notes">Notes</h2>
<ul>
<li>This is probably one of the rare cases where forcing the link to open in a new tab by adding <code>target="_blank"</code> is a good idea.</li>
<li>I got the idea for writing this down from a project called <a href="https://github.com/jekuer/add-to-calendar-button">add-to-calendar-button</a> by <a href="https://jenskuerschner.de/">Jens Kuerschner</a>.</li>
<li>A lot of the query parameters here come from <a href="https://github.com/InteractionDesignFoundation/add-event-to-calendar-docs/tree/main/services">some documentation</a> put together by a company called the Interaction Design Foundation.</li>
<li>In case I need it, the <a href="https://datatracker.ietf.org/doc/html/rfc5545#section-3.8.2.7">RFC 5545 spec</a></li>
</ul>
Why do all NFTs look the same?2022-03-18T00:00:00Zhttps://maxkohler.com/posts/why-do-all-nfts-look-the-same/<p><span class="leadin">A figure pulled</span> from <em><a href="https://opensea.io/collection/meebits">Minecraft</a></em>, <em><a href="https://opensea.io/collection/dourdarcels">Minions</a></em>, <em><a href="https://opensea.io/collection/clonex#">Fortnite</a></em>, <em><a href="https://opensea.io/collection/degentoonz-collection">Looney Tunes</a></em>, <a href="https://opensea.io/collection/cryptocoven">DeviantArt portraiture</a>, the <em><a href="https://opensea.io/collection/mfers">Are ya winning son</a></em> meme, <a href="https://www.wired.co.uk/article/corporate-memphis-design-tech">corporate illustration</a>, or some other slice of the American vernacular is shown in three-quarter portrait on a field of bright colour, looking indifferent. The figure is decorated with references to popular entertainment (<em>Harley Quinn</em>’s baseball bat, the three-eyed fish from <em>The Simpsons</em>, the floating orb of water from <em>The Last Airbender</em>, the face mask from <em>Mad Max</em>, <em>Yu-Gi-Oh!</em>’s hair, <em>Venom</em>’s teeth), popular consumption (bum bags, hoodies, puffer jackets, sneakers, vapes, tinted glasses, branded headphones, gold jewellery, takeout food containers), and crypto-specific symbols (laser eyes, diamonds, and currency icons).</p>
<p>The image is either a vector drawing or 3d-rendered, but in either case there is little suggestion of depth; every element is evenly lit and depicted in sharp detail, as if pressed right against the image surface. There is a love of visual detail: every hair is precisely delineated, every piece of gold is polished, cloth is carefully draped, lasers glow.</p>
<p>This description covers most non-fungible tokens at the <a href="https://opensea.io/explore-collections">top of OpenSea</a> (the biggest website for buying and selling NFTs) on any given day. Why do all these images look so alike?</p>
<hr />
<p>The first level of resemblance has to do with the fact that mainstream NFTs are generally produced by an operation called layering. You start by making a list of elements like “background”, “clothes”, and “hat”. Then you produce (or pay a gig worker to produce) a set of images corresponding to each element: A few different backgrounds, some variations of your character, and some different hats. Finally you use a simple computer program to stack these layers on top of each other in random combinations, producing a set of final images. The more elements and layers you have, the more images you can produce, and the bigger your payoff will be if the collection catches on.</p>
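<p>The whole operation really is only a few lines of code. A minimal sketch – the categories and options are invented:</p>

```javascript
// Pick one element from each category at random and stack them.
const layers = {
  background: ["blue", "pink", "green"],
  character: ["ape", "cat", "robot"],
  hat: ["cap", "crown", "none"],
};

function randomCombination(layers) {
  return Object.fromEntries(
    Object.entries(layers).map(([category, options]) => [
      category,
      options[Math.floor(Math.random() * options.length)],
    ])
  );
}

const image = randomCombination(layers);
// e.g. { background: "pink", character: "robot", hat: "crown" }
```

<p>Run it ten thousand times and you have a collection.</p>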
<p>The practice of assembling an image or other media object from a set of individual elements isn’t unique to NFTs. It has some parallels to collage, but the more apt comparison is compositing, a process that happens everywhere in contemporary media production. The design of any movie, video game, or other “new media object”, writes the critic Lev Manovich,</p>
<blockquote>
<p>… begins with assembling a database of possible elements to be used … Throughout the design process, new elements are added to the database; existing elements are modified. The narrative is constructed by linking elements of this database in a particular order, that is by designing a trajectory leading from one element to another. On the material level, a narrative is just a set of links. <sup class="footnote-ref"><a href="https://maxkohler.com/posts/why-do-all-nfts-look-the-same/#fn1" id="fnref1">1</a></sup></p>
</blockquote>
<p>The most visible example of this is the film industry. When Tony Stark walks through a hangar in <em>Avengers: Endgame</em>, the footage of Robert Downey Jr, the 3d-model of the costume, the building, the HDRI sky, the aeroplanes in the background, and even the lens flares are only temporarily brought into the same frame — in reality they’re separate, independent assets (both in the media-industrial and financial sense of that term), ready to be re-assembled into other outputs down the line.</p>
<figure class="post-figure embed-container big">
<div class="embed-placeholder">
<p>
This page contains embedded content from <a href="https://youtube.com/">Youtube</a>, who might use cookies and other technologies to track you. To view this content, click <em>Allow Youtube content</em>.
</p>
<button class="embed-load button">Allow Youtube content</button>
</div>
<div class="embed" style="padding:50% 0 0 0;position:relative;">
<iframe data-src="https://www.youtube-nocookie.com/embed/UzT_SXG_bAI?controls=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" frameborder="0" allow="autoplay; fullscreen" allowfullscreen=""></iframe>
</div>
<figcaption>
<span class="figure-caption">
<p>VFX Breakdown for <em>Avengers: Endgame</em></p>
</span>
<span class="figure-source">
<p><a href="https://www.youtube.com/watch?v=UzT_SXG_bAI">Framestore</a></p>
</span>
</figcaption>
</figure>
<p>NFTs are a heightened, distorted version of this. Making a collection is mostly about filling the database; the linking happens almost incidentally, in the simplest way possible (randomised layering), fully automated in a few dozen lines of code.</p>
<p>If you want to make visually coherent images in this way you have to make the elements interchangeable; this limits the kinds of visual gestures you can make and, together with the fact that the same elements appear again and again, produces collections of very similar images. It also causes the flattened, occasionally shifting perspective and visible seams between elements that are characteristic of many NFT collections.</p>
<p>In both cases, the composite nature of the images isn’t a secret. Disney, who owns the <em>Avengers</em> assets, releases regular VFX breakdowns demonstrating the fact to their shareholders and everyone else, and OpenSea shows a list of the constituent elements next to every NFT for the same reason.</p>
<hr />
<p>The similarity between NFTs across the field (not just inside a given collection) is an extension of this logic. A prominent selling point of many collections isn’t so much that they’re assembled from a database, but that they might themselves <em>become</em> one; their images re-assembled into fresh media products like comic books, toys, TV shows, video games, experiences, and merchandise.</p>
<p>NFT projects aren’t generally prepared to do the work to actually make any of these things (it takes hundreds of artists <a href="https://www.latimes.com/style/la-xpm-2012-apr-20-la-fi-ct-visual-effects-workers-20120420-story.html">working 12-hour shifts</a> to turn a database like Disney’s into a movie like <em>Endgame</em>, not to mention the people sewing the merchandise or <a href="https://www.thenation.com/article/economy/disney-iger-labor/">staffing the theme parks</a>), but that doesn’t matter. The idea that such labour <em>could</em> be performed in the future, and that you would be able to pocket the surplus by owning a piece of the database is enough to sell it.</p>
<p>This piece of speculation starts out in roadmaps and other marketing material, but it quickly seeps down and across into the images themselves, where it crystallises into the visual cues we’re familiar with:</p>
<ul>
<li>Characters are popular because it’s easy to imagine how they might appear again and again in different media products, <em>just like Tony Stark</em>. Visual references to existing media properties (both in the choice of character and the accessories) are designed to reinforce this connection.</li>
<li>The plain backgrounds, walk cycles, soft lighting and neutral expressions reiterate the possibility that the figure is ready to be combined with <em>real</em> environments and <em>real</em> behaviours into a real product. (This is also what those gestures are designed to communicate in VFX breakdowns).</li>
<li>The depictions of real-world luxury products might be read as straightforward signifiers of value (<em>gold chains are valuable, therefore an image one is valuable as well</em>) or, maybe more aptly, as a promise to those who buy into the speculation — <em>you, too, <a href="https://www.urbandictionary.com/define.php?term=wagmi">are going to make it</a></em>.</li>
</ul>
<p>Incidentally, this also explains why there is such a big push to <a href="https://www.artsy.net/article/artsy-editorial-nft-profile-pics-appeal-collectors-artists-alike">enable NFTs as avatars</a> on social media sites like Twitter and Instagram. It’s a growth hack (people see your avatar, they buy an image from the same collection and set it as their avatar, more people see it, everyone profits), but more importantly it shores up the claim that NFTs can be composited into other media objects.</p>
<p>Of course, proponents are quick to point out that this is only proof-of-concept. The merchandise, video games and all the rest will be here any minute now.</p>
<figure class="post-figure big">
<img alt="Three silver coins show a woman's face. The image varies slightly." loading="lazy" src="https://maxkohler.com/assets/coins.png" />
<figcaption>
<span class="figure__caption">
<p>Ancient Greek coins showing Helios, the god of the Sun, ca. 350 BC.</p>
</span>
<span class="figure__source">
<p>British Museum <a href="https://www.britishmuseum.org/collection/object/C_1949-0411-781">1949,0411.781</a>, <a href="https://www.britishmuseum.org/collection/object/C_1949-0411-775">1949,0411.775</a> , <a href="https://www.britishmuseum.org/collection/object/C_1955-1102-8">1955,1102.8</a>, all <a href="https://creativecommons.org/licenses/by-nc-sa/4.0/">CC BY-NC-SA 4.0</a></p>
</span>
</figcaption>
</figure>
<p>It seems likely that NFTs will implode for reasons external to them: The underlying currency might collapse, they might be regulated out of existence for the <a href="https://www.theverge.com/2021/3/15/22328203/nft-cryptoart-ethereum-blockchain-climate-change">environmental fallout</a> or the <a href="https://web3isgoinggreat.com/?tech=nft">widespread fraud</a> (or both), or they might just <a href="https://www.ft.com/content/46349496-790a-4223-8c65-d6a0bde897bc">run out of buyers</a>. But there is a deeper argument against them: There isn’t really such a thing as a non-fungible image.</p>
<p>“In principle”, the philosopher Walter Benjamin <a href="https://www.marxists.org/reference/subject/philosophy/works/ge/benjamin.htm">wrote in 1936</a>, “the work of art has always been reproducible”. <sup class="footnote-ref"><a href="https://maxkohler.com/posts/why-do-all-nfts-look-the-same/#fn2" id="fnref2">2</a></sup> People throughout history, he argued, made images in multiples: The ancient Greeks mass-produced pictures of their rulers by striking them into coins, which they spread across the continent. Around the third century, Chinese cloth-makers began to carve images into wooden blocks, which they covered in ink and pressed against silk, leaving a coloured impression that could be repeated over and over. The technique spread, and soon artists in every town were churning out thousands of playing cards, religious icons, portraits, and scenes from nature and everyday life. Woodblock printing was joined by copperplate engraving in the 15th, lithography in the 18th, and photography in the 19th century, each time increasing the veracity and speed with which images could be reproduced.</p>
<p>The current stage of this development is the digital image, where even the simple act of looking entails multiple acts of reproduction. When you open an image on your computer, it’s copied from your computer’s hard drive into its working memory, parsed and translated, until a specific array of pixels in your screen is lit up to render the image. As soon as you close the window, those pixels are turned off and the picture you were looking at is destroyed, only to be produced afresh the next time you open it. When the image is stored online (as NFTs typically are), this process happens every time anyone looks at it.</p>
<p>Naturally each of those coins, bolts of silk, packs of playing cards, printed photos, and arrays of pixels on your screen are as “real” and “authentic” and “valuable” as all the rest of them — there isn’t really an “original” to speak of.</p>
<p>It takes an enormous amount of labour to suspend belief in this fact, even temporarily. In a classic essay, the art critic John Berger describes the lengths to which the National Gallery in London has to go to maintain the notion that their version of a painting by Leonardo is in fact “the original”:</p>
<blockquote>
<p>[The catalogue entry] on the “Virgin of the Rocks” is one of the longest entries. It consists of fourteen closely printed pages. They do not deal with the meaning of the image. They deal with who commissioned the painting, legal squabbles, who owned it, its likely date, the families of its owners. Behind this information lie years of research. The aim of the research is to prove beyond the shadow of a doubt that the painting is a genuine Leonardo. The secondary aim of the research is to prove that an almost identical painting in the Louvre is a replica of the National Gallery version. French art historians try to prove the opposite. <sup class="footnote-ref"><a href="https://maxkohler.com/posts/why-do-all-nfts-look-the-same/#fn3" id="fnref3">3</a></sup></p>
</blockquote>
<p>Similarly, Disney employs an army of copyright lawyers, image recognition software, lobbyists, and a whole judicial apparatus to maintain the notion that the 3d-model of <em>Iron Man</em> in their database is in fact “the original”, and that everyone else only has a temporary viewing license.</p>
<p>NFTs are another attempt at this. People who make them recognise it’s difficult to argue that a digital image can be “original” on any material level, so they suggest a kind of authenticity-by-proxy: Buy an NFT and you get a unique entry in our special database <em>saying</em> you own the image. That database entry has effectively the same function as those fancy art historians and copyright lawyers: Establish authorship, keep track of provenance, authorise derivative works, mediate royalty payments, and so on.</p>
<p>Critics argue that this doesn’t work: There is no way of knowing, for instance, if someone who mints an NFT really made the image, and buying an NFT doesn’t <a href="https://techcrunch.com/2021/06/16/no-nfts-arent-copyrights/">really give you ownership</a> of the image in any legally recognised form.</p>
<p>They’re clearly right, but if there is anything to learn from the history of image-making, it’s that the notion of the attributable, ownable, dateable “original” is itself pretty shaky. Images were always produced collectively and in abundance; the recent drive (historically speaking) to enclose them for individual profit must be overcome.<sup class="footnote-ref"><a href="https://maxkohler.com/posts/why-do-all-nfts-look-the-same/#fn4" id="fnref4">4</a></sup></p>
<figure class="post-figure thumbnail">
<img alt="Explosion diagram of a cartoon monkey's head. Skull, skin, fur and eyes are spread horizontally on white ground." loading="lazy" src="https://maxkohler.com/assets/monkey.jpg" />
<figcaption>
</figcaption>
</figure>
<section class="footnotes">
<ol class="footnotes-list">
<li id="fn1" class="footnote-item"><p>Lev Manovich (2001): <em>The Language of New Media</em>, p. 231 <a href="https://maxkohler.com/posts/why-do-all-nfts-look-the-same/#fnref1" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn2" class="footnote-item"><p>Walter Benjamin (1936): <em>The Work of Art in the age of Mechanical Reproduction</em>. Available at <a href="https://www.marxists.org/reference/subject/philosophy/works/ge/benjamin.htm">marxists.org/reference/subject/philosophy/works/ge/benjamin.htm</a> <a href="https://maxkohler.com/posts/why-do-all-nfts-look-the-same/#fnref2" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn3" class="footnote-item"><p>John Berger (1972): <em>Ways of Seeing</em>, p. 22. Available at <a href="https://archive.org/details/waysofseeing00berg/page/22/mode/2up">archive.org/details/waysofseeing00berg/page/22/mode/2up</a> <a href="https://maxkohler.com/posts/why-do-all-nfts-look-the-same/#fnref3" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn4" class="footnote-item"><p>This post also appeared <a href="https://maxakohler.medium.com/why-do-all-nfts-look-the-same-8e5da0cd0a1b">on Medium</a> <a href="https://maxkohler.com/posts/why-do-all-nfts-look-the-same/#fnref4" class="footnote-backref">↩︎</a></p>
</li>
</ol>
</section>
Per-file commit logs with Eleventy2022-03-21T00:00:00Zhttps://maxkohler.com/posts/per-file-commit-history-with-eleventy/<h2 id="background">Background</h2>
<p>Sometimes it's a good idea to publicly document how a website changes over time. I'm thinking of things like legal documents, technical writing, public policy, or any other piece of content you want to be extra transparent about.</p>
<p>If you're going to do this, you probably want to:</p>
<ul>
<li>Document <em>every</em> change (even minor ones), and have that documentation accurately reflect the changes you made.</li>
<li>Provide that documentation <em>in context</em>. If someone wants to trace changes to your privacy policy, that information should be right there with the original document. Don't make them go digging for it in an email or company blog.</li>
</ul>
<p>If your content is under version control you're already doing both of these things. Unless you go out of your way, you literally cannot change a file <em>without</em> creating a permanent record containing the diff, your name, the date, and a message describing the change. And git has extremely good built-in tools to query those records by file, date, author, and other contextual parameters.</p>
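<p>You can see git's query side on the command line. A self-contained sketch – the repo and file names are invented, and in a real project you'd only run the final command inside your site's repository:</p>

```shell
# Set up a throwaway repo with two commits to one file
cd "$(mktemp -d)"
git init -q
git config user.name "Demo" && git config user.email "demo@example.com"
echo "# Post" > one.md
git add one.md && git commit -qm "Add post"
echo "Fix a typo" >> one.md
git commit -qam "Fix typo"

# One line per commit that touched this file, newest first
git log --follow --date=short --pretty='%h %ad %an %s' -- one.md
```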
<p>Why not leverage that and generate changelogs for individual pages directly from the commit history?</p>
<h2 id="eleventy-%2B-simple-git">Eleventy + Simple Git</h2>
<p>We're going to use <a href="https://www.npmjs.com/package/simple-git">Simple Git</a> to read the commit history and make it available to templates using Eleventy's <a href="https://www.11ty.dev/docs/data-computed/">computed data</a> feature.</p>
<p>Let's assume we want to generate changelogs for Markdown files in a collection called <code>posts</code>. We start by creating a data file at <code>/posts/posts.11tydata.js</code>. Note that the filename must match the name of the directory it lives in.</p>
<pre class="language-diff"><code class="language-diff"><span class="token unchanged"><span class="token prefix unchanged"> </span>package.json
<span class="token prefix unchanged"> </span>.eleventy.js
<span class="token prefix unchanged"> </span>_includes/
<span class="token prefix unchanged"> </span>posts/
<span class="token prefix unchanged"> </span> one.md
<span class="token prefix unchanged"> </span> two.md
<span class="token prefix unchanged"> </span> three.md
</span><span class="token inserted-sign inserted"><span class="token prefix inserted">+</span> posts.11tydata.js
</span></code></pre>
<p>Creating our data file inside the <code>/posts</code> directory puts it at the end of Eleventy's <a href="https://www.11ty.dev/docs/data-cascade/">data cascade</a>, allowing us to read and write data for individual posts.</p>
<p>We start by <em>reading</em> <code>page.inputPath</code>, an <a href="https://www.11ty.dev/docs/data-eleventy-supplied/">auto-generated</a> property that contains the path to the Markdown file being processed. Then, we pass that information to <code>git.log()</code> to get that file's commit history, and <em>write</em> the result into the post's data object.</p>
<p><span class="code__title">posts.11tydata.js</span></p>
<pre class="language-js"><code class="language-js"><span class="token keyword">const</span> git <span class="token operator">=</span> <span class="token function">require</span><span class="token punctuation">(</span><span class="token string">'simple-git'</span><span class="token punctuation">)</span><span class="token punctuation">(</span><span class="token punctuation">)</span><span class="token punctuation">;</span>
<span class="token keyword">async</span> <span class="token keyword">function</span> <span class="token function">getChanges</span><span class="token punctuation">(</span><span class="token parameter">data</span><span class="token punctuation">)</span> <span class="token punctuation">{</span>
<span class="token keyword">const</span> options <span class="token operator">=</span> <span class="token punctuation">{</span>
<span class="token literal-property property">file</span><span class="token operator">:</span> data<span class="token punctuation">.</span>page<span class="token punctuation">.</span>inputPath<span class="token punctuation">,</span>
<span class="token punctuation">}</span>
<span class="token keyword">try</span> <span class="token punctuation">{</span>
<span class="token keyword">const</span> history <span class="token operator">=</span> <span class="token keyword">await</span> git<span class="token punctuation">.</span><span class="token function">log</span><span class="token punctuation">(</span>options<span class="token punctuation">)</span><span class="token punctuation">;</span>
<span class="token keyword">return</span> history<span class="token punctuation">.</span>all
<span class="token punctuation">}</span> <span class="token keyword">catch</span> <span class="token punctuation">(</span>e<span class="token punctuation">)</span> <span class="token punctuation">{</span>
<span class="token keyword">return</span> <span class="token keyword">null</span><span class="token punctuation">;</span>
<span class="token punctuation">}</span>
<span class="token punctuation">}</span>
module<span class="token punctuation">.</span>exports <span class="token operator">=</span> <span class="token punctuation">{</span>
<span class="token literal-property property">eleventyComputed</span><span class="token operator">:</span> <span class="token punctuation">{</span>
<span class="token function-variable function">changes</span><span class="token operator">:</span> <span class="token keyword">async</span> <span class="token parameter">data</span> <span class="token operator">=></span> <span class="token keyword">await</span> <span class="token function">getChanges</span><span class="token punctuation">(</span>data<span class="token punctuation">)</span>
<span class="token punctuation">}</span>
<span class="token punctuation">}</span></code></pre>
<p>When we run Eleventy now, the data object for each post contains a list of commits to the underlying Markdown file in reverse-chronological order:</p>
<pre class="language-diff-json"><code class="language-diff-json"><span class="token unchanged language-json"><span class="token prefix unchanged"> </span><span class="token punctuation">{</span>
<span class="token prefix unchanged"> </span> <span class="token property">"title"</span><span class="token operator">:</span> <span class="token string">"My Page Title"</span><span class="token punctuation">,</span>
</span><span class="token inserted-sign inserted language-json"><span class="token prefix inserted">+</span> <span class="token property">"changes"</span><span class="token operator">:</span> <span class="token punctuation">[</span>
<span class="token prefix inserted">+</span> <span class="token punctuation">{</span>
<span class="token prefix inserted">+</span> <span class="token property">"hash"</span><span class="token operator">:</span> <span class="token string">"0cd158fc81a4d3aefd52e6f416542d3549ef4b4e"</span><span class="token punctuation">,</span>
<span class="token prefix inserted">+</span> <span class="token property">"date"</span><span class="token operator">:</span> <span class="token string">"2022-03-19T22:46:53+01:00"</span><span class="token punctuation">,</span>
<span class="token prefix inserted">+</span> <span class="token property">"message"</span><span class="token operator">:</span> <span class="token string">"This is the latest commit"</span><span class="token punctuation">,</span>
<span class="token prefix inserted">+</span> <span class="token property">"refs"</span><span class="token operator">:</span> <span class="token string">"origin/master, origin/HEAD"</span><span class="token punctuation">,</span>
<span class="token prefix inserted">+</span> <span class="token property">"body"</span><span class="token operator">:</span> <span class="token string">""</span><span class="token punctuation">,</span>
<span class="token prefix inserted">+</span> <span class="token property">"author_name"</span><span class="token operator">:</span> <span class="token string">"Max Kohler"</span><span class="token punctuation">,</span>
<span class="token prefix inserted">+</span> <span class="token property">"author_email"</span><span class="token operator">:</span> <span class="token string">"hello@maxkohler.com"</span>
<span class="token prefix inserted">+</span> <span class="token punctuation">}</span><span class="token punctuation">,</span>
<span class="token prefix inserted">+</span> <span class="token punctuation">{</span>
<span class="token prefix inserted">+</span> <span class="token property">"hash"</span><span class="token operator">:</span> <span class="token string">"49f0cbe2d12f7fd23a1357fcebdcba2ee8d297a1"</span><span class="token punctuation">,</span>
<span class="token prefix inserted">+</span> <span class="token property">"date"</span><span class="token operator">:</span> <span class="token string">"2022-03-12T09:15:21+01:00"</span><span class="token punctuation">,</span>
<span class="token prefix inserted">+</span> <span class="token property">"message"</span><span class="token operator">:</span> <span class="token string">"This is another commit"</span><span class="token punctuation">,</span>
<span class="token prefix inserted">+</span> <span class="token property">"refs"</span><span class="token operator">:</span> <span class="token string">"origin/master, origin/HEAD"</span><span class="token punctuation">,</span>
<span class="token prefix inserted">+</span> <span class="token property">"body"</span><span class="token operator">:</span> <span class="token string">"This one has an extended description"</span><span class="token punctuation">,</span>
<span class="token prefix inserted">+</span> <span class="token property">"author_name"</span><span class="token operator">:</span> <span class="token string">"Max Kohler"</span><span class="token punctuation">,</span>
<span class="token prefix inserted">+</span> <span class="token property">"author_email"</span><span class="token operator">:</span> <span class="token string">"hello@maxkohler.com"</span>
<span class="token prefix inserted">+</span> <span class="token punctuation">}</span>
<span class="token prefix inserted">+</span> <span class="token punctuation">]</span>
</span><span class="token unchanged language-json"><span class="token prefix unchanged"> </span><span class="token punctuation">}</span>
</span></code></pre>
<p>We can now use whatever templating engine we want to render this data to the page. I happen to use Liquid, so I'd write something like:</p>
<p><span class="code__title">_includes/post.liquid</span></p>
<pre class="language-liquid"><code class="language-liquid"><span class="token liquid language-liquid"><span class="token delimiter punctuation">{%</span> <span class="token keyword">if</span> changes <span class="token delimiter punctuation">%}</span></span>
<span class="token tag"><span class="token tag"><span class="token punctuation"><</span>ul</span> <span class="token attr-name">class</span><span class="token attr-value"><span class="token punctuation attr-equals">=</span><span class="token punctuation">"</span>changes<span class="token punctuation">"</span></span><span class="token punctuation">></span></span>
<span class="token liquid language-liquid"><span class="token delimiter punctuation">{%</span> <span class="token keyword">for</span> c <span class="token keyword">in</span> changes <span class="token delimiter punctuation">%}</span></span>
<span class="token tag"><span class="token tag"><span class="token punctuation"><</span>li</span> <span class="token attr-name">class</span><span class="token attr-value"><span class="token punctuation attr-equals">=</span><span class="token punctuation">"</span>change<span class="token punctuation">"</span></span><span class="token punctuation">></span></span>
<span class="token tag"><span class="token tag"><span class="token punctuation"><</span>time</span> <span class="token attr-name">class</span><span class="token attr-value"><span class="token punctuation attr-equals">=</span><span class="token punctuation">"</span>change__time<span class="token punctuation">"</span></span><span class="token punctuation">></span></span><span class="token liquid language-liquid"><span class="token delimiter punctuation">{{</span> c<span class="token punctuation">.</span><span class="token object">date</span> <span class="token delimiter punctuation">}}</span></span><span class="token tag"><span class="token tag"><span class="token punctuation"></</span>time</span><span class="token punctuation">></span></span>
<span class="token tag"><span class="token tag"><span class="token punctuation"><</span>h3</span> <span class="token attr-name">class</span><span class="token attr-value"><span class="token punctuation attr-equals">=</span><span class="token punctuation">"</span>change__title<span class="token punctuation">"</span></span><span class="token punctuation">></span></span><span class="token liquid language-liquid"><span class="token delimiter punctuation">{{</span> c<span class="token punctuation">.</span>message <span class="token delimiter punctuation">}}</span></span><span class="token tag"><span class="token tag"><span class="token punctuation"></</span>h3</span><span class="token punctuation">></span></span>
<span class="token tag"><span class="token tag"><span class="token punctuation"><</span>span</span> <span class="token attr-name">class</span><span class="token attr-value"><span class="token punctuation attr-equals">=</span><span class="token punctuation">"</span>change__hash<span class="token punctuation">"</span></span><span class="token punctuation">></span></span><span class="token liquid language-liquid"><span class="token delimiter punctuation">{{</span> c<span class="token punctuation">.</span>hash <span class="token delimiter punctuation">}}</span></span><span class="token tag"><span class="token tag"><span class="token punctuation"></</span>span</span><span class="token punctuation">></span></span>
<span class="token tag"><span class="token tag"><span class="token punctuation"></</span>li</span><span class="token punctuation">></span></span>
<span class="token liquid language-liquid"><span class="token delimiter punctuation">{%</span> <span class="token keyword">endfor</span> <span class="token delimiter punctuation">%}</span></span>
<span class="token tag"><span class="token tag"><span class="token punctuation"></</span>ul</span><span class="token punctuation">></span></span>
<span class="token liquid language-liquid"><span class="token delimiter punctuation">{%</span> <span class="token keyword">endif</span> <span class="token delimiter punctuation">%}</span></span></code></pre>
<h2 id="demo">Demo</h2>
<p>Here's the real, auto-generated changelog for this post using a slightly modified version of the code above:</p>
<ul class="changes">
<li class="change">
<time class="change__time">27/03/22, 18:33</time>
<a class="change__link" href="https://github.com/awesomephant/blog/commit/c4deb8f7a9e0fd714b3459607d62cf4164ae38f6">Fix typo</a>
</li>
<li class="change">
<time class="change__time">21/03/22, 20:41</time>
<a class="change__link" href="https://github.com/awesomephant/blog/commit/c7c1f4c689da3a76b75abf0729e8ec6c26b335fb">Fix date</a>
</li>
<li class="change">
<time class="change__time">21/03/22, 20:40</time>
<a class="change__link" href="https://github.com/awesomephant/blog/commit/833b861b3c7bc80e2e55a7fe0cf8e34345e035be">Remove unnecessary code snippet title</a>
</li>
<li class="change">
<time class="change__time">21/03/22, 20:39</time>
<a class="change__link" href="https://github.com/awesomephant/blog/commit/828e46d5f3b0f3cbd930f8fdd077efdf2a5ae097">Edit copy</a>
</li>
<li class="change">
<time class="change__time">21/03/22, 15:12</time>
<a class="change__link" href="https://github.com/awesomephant/blog/commit/9b650a337eec0ed2c62c9adfdfd42e6e12314689">Add Eleventy/Changelog post</a>
</li>
</ul>
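<p>The demo dates above are shortened from the ISO strings git returns. Here's a sketch of how that formatting could be done with <code>Intl.DateTimeFormat</code> (an illustration, not necessarily the exact filter this site uses):</p>

```javascript
// Sketch: turn an ISO commit date ("2022-03-19T22:46:53+01:00") into a
// short form like "19/03/22, 21:46". Pinned to UTC so the output is stable
// regardless of where the site is built.
function formatCommitDate(iso) {
  return new Intl.DateTimeFormat("en-GB", {
    day: "2-digit",
    month: "2-digit",
    year: "2-digit",
    hour: "2-digit",
    minute: "2-digit",
    hour12: false,
    timeZone: "UTC",
  }).format(new Date(iso));
}
```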
<h2 id="notes">Notes</h2>
<ul>
<li>If you want to tweak which commits are returned by <code>git.log()</code>, it has <a href="https://github.com/steveukx/git-js#git-log">lots of options</a>.</li>
<li><code>git.log()</code> is an expensive operation. On my machine, in a repository with about 1,000 commits, it increases the average processing time from 50ms to 150ms. If you're going to do this, you might want to limit it to files where you actually want to show the changelog, or to production builds, or both.</li>
<li>This solution only deals with linear history - one change after another. It would be interesting to try to visualise forks, branches, merges and everything else git can do, specifically in the context of writing. I remember reading a Hito Steyerl essay she described as a <em>fork</em> of another text - even if she was using the term somewhat metaphorically, I like the idea.</li>
</ul>
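<p>The cost note above can be sketched as a small guard: only compute the changelog on production builds, and only for pages that opt in. Here <code>showChangelog</code> is a hypothetical front-matter flag, not a built-in Eleventy key:</p>

```javascript
// Sketch: gate the expensive git.log() call behind an environment check and
// a per-page opt-in. showChangelog is a hypothetical front-matter flag.
function shouldComputeChangelog(data, env = process.env) {
  return env.NODE_ENV === "production" && Boolean(data.showChangelog);
}

// In eleventyComputed, the changes function could then bail out early:
// changes: async (data) => (shouldComputeChangelog(data) ? await getChanges(data) : [])
```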
The simplest possible Wordpress footnotes2022-10-17T00:00:00Zhttps://maxkohler.com/posts/worlds-simplest-wordpress-footnotes/<p>The other day I was building a Wordpress website that needed footnotes on posts. Wordpress doesn't support that natively (there's an <a href="https://github.com/WordPress/gutenberg/issues/1890">issue about it</a> that's been open since 2017). My first thought was to use a plugin, but I couldn't find one that was maintained, had the right features, and didn't include all kinds of extra markup.</p>
<p>Here's what I want my footnotes to do:</p>
<ul>
<li>In the post editor, I want to write something like: <code>This is my sentence((and this goes into a footnote))</code></li>
<li>That markup should be replaced with a numbered anchor link</li>
<li>The content of the note should be rendered at the bottom of the post in an <code><ol></code></li>
</ul>
<pre class="language-php"><code class="language-php"><span class="token keyword">function</span> <span class="token function-definition function">mytheme_extract_footnotes</span><span class="token punctuation">(</span><span class="token variable">$content</span><span class="token punctuation">)</span>
<span class="token punctuation">{</span>
<span class="token variable">$footnotes</span> <span class="token operator">=</span> <span class="token keyword">array</span><span class="token punctuation">(</span><span class="token punctuation">)</span><span class="token punctuation">;</span>
<span class="token variable">$context</span> <span class="token operator">=</span> <span class="token keyword">array</span><span class="token punctuation">(</span><span class="token punctuation">)</span><span class="token punctuation">;</span>
<span class="token variable">$pattern</span> <span class="token operator">=</span> <span class="token string double-quoted-string">"/(?:\(\()(.*?)(?:\)\))/"</span><span class="token punctuation">;</span>
<span class="token function">preg_match_all</span><span class="token punctuation">(</span><span class="token variable">$pattern</span><span class="token punctuation">,</span> <span class="token variable">$content</span><span class="token punctuation">,</span> <span class="token variable">$footnotes</span><span class="token punctuation">)</span><span class="token punctuation">;</span>
<span class="token variable">$content</span> <span class="token operator">=</span> <span class="token function">preg_replace_callback</span><span class="token punctuation">(</span><span class="token variable">$pattern</span><span class="token punctuation">,</span> <span class="token keyword">function</span> <span class="token punctuation">(</span><span class="token variable">$matches</span><span class="token punctuation">)</span> <span class="token punctuation">{</span>
<span class="token keyword">static</span> <span class="token variable">$fn_index</span> <span class="token operator">=</span> <span class="token number">0</span><span class="token punctuation">;</span>
<span class="token variable">$fn_index</span><span class="token operator">++</span><span class="token punctuation">;</span>
<span class="token keyword">return</span> <span class="token string single-quoted-string">'<a class="footnote__ref" href="#note-'</span> <span class="token operator">.</span> <span class="token variable">$fn_index</span> <span class="token operator">.</span> <span class="token string single-quoted-string">'">'</span> <span class="token operator">.</span> <span class="token variable">$fn_index</span> <span class="token operator">.</span> <span class="token string single-quoted-string">'</a>'</span><span class="token punctuation">;</span>
<span class="token punctuation">}</span><span class="token punctuation">,</span> <span class="token variable">$content</span><span class="token punctuation">)</span><span class="token punctuation">;</span>
<span class="token variable">$context</span><span class="token punctuation">[</span><span class="token string double-quoted-string">"footnotes"</span><span class="token punctuation">]</span> <span class="token operator">=</span> <span class="token variable">$footnotes</span><span class="token punctuation">[</span><span class="token number">1</span><span class="token punctuation">]</span><span class="token punctuation">;</span>
<span class="token comment">// $output is the rendered footnote list markup (built below)</span>
<span class="token keyword">return</span> <span class="token variable">$content</span> <span class="token operator">.</span> <span class="token variable">$output</span><span class="token punctuation">;</span>
<span class="token punctuation">}</span>
<span class="token function">add_filter</span><span class="token punctuation">(</span><span class="token string single-quoted-string">'the_content'</span><span class="token punctuation">,</span> <span class="token string single-quoted-string">'mytheme_extract_footnotes'</span><span class="token punctuation">)</span><span class="token punctuation">;</span></code></pre>
<p>The site I was working on used Timber, so I actually wrote the following:</p>
<pre class="language-php"><code class="language-php"><span class="token variable">$output</span> <span class="token operator">=</span> <span class="token class-name static-context">Timber</span><span class="token operator">::</span><span class="token function">compile</span><span class="token punctuation">(</span><span class="token string single-quoted-string">'/partial/footnotes.twig'</span><span class="token punctuation">,</span> <span class="token variable">$context</span><span class="token punctuation">)</span><span class="token punctuation">;</span></code></pre>
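<p>For illustration, here's the same extract-and-replace logic as a standalone JavaScript sketch (a hypothetical helper, not WordPress or Timber code). Note the lazy <code>.*?</code>, which keeps two footnotes on the same line from being merged into one match:</p>

```javascript
// Sketch: pull ((footnote)) markers out of a string, replacing each with a
// numbered anchor link and collecting the note text for rendering later.
function extractFootnotes(content) {
  const pattern = /\(\((.*?)\)\)/g; // lazy, so adjacent footnotes stay separate
  const footnotes = [...content.matchAll(pattern)].map((m) => m[1]);
  let index = 0;
  const body = content.replace(pattern, () => {
    index++;
    return `<a class="footnote__ref" href="#note-${index}">${index}</a>`;
  });
  return { body, footnotes };
}
```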