Stitching light, analog and digital

Detail of stitched interpretation of AI reading of a darn

Lately I’ve been turning my attention to both the phenomena and the artifacts of pixels by stitching them. It began a couple of years ago when I made a casual observation to the designer/artist David Young that some of his digital images bore an uncanny likeness to textiles. David’s images are particularly interesting because he creates them (actually, co-creates them) with artificial intelligence and machine learning programs. Sometimes he trains the AI on color blocks, other times on landscapes. We decided to see what would happen if he trained it on an actual textile—specifically, one of my darns, which are small, stitched weaves of thread. The result is an ongoing project we call Echo Chambers. (For the full backstory, see Negotiating the Image with Pixels and Thread.)

Print of the AI’s interpretation of a darn, manipulated by me with David Young’s custom software

In a nutshell, David trained the AI on a dozen or so photos of my darns. (This is the opposite of the usual practice, which is to train the AI on hundreds or thousands of images so it can replicate whatever it’s been shown.) In any case, this AI builds a neural network of a darn, and the neural network generates images from its understanding via pixels. At some point, David stops the process, selects a still frame, and then we manipulate it with his custom software. The resulting image is printed on paper and transferred to mesh or canvas. I then stitch it back into existence by referring to the lines on the mesh as general coordinates, but mostly by looking at the printed image.

When I started this project, I was simply curious to see what the translation would yield.  It quickly became a challenge to see how closely I could replicate David’s images. But the more I worked on them—I’ve now made 8—the more I wondered whether I was just doing paint by numbers with embroidery thread, or if the project could open up different ways of understanding the binary nature of stitching (which goes above and below a plane) and the binary of the digital’s ones and zeros.

Still from AI/machine learning view of a darn rendered in pixels

First I had to figure out exactly what it was that I was stitching, namely pixels. Thanks to Hugh Dubberly and to Alvy Ray Smith’s book A Biography of the Pixel, I learned that pixels are derived from the crests of spatial waves1—waves mathematically derived from the patterns and intensities of light and dark that fill our visual fields. (The model comes from our biology. The eye’s retina transforms a visual image into a neural code of sine waves that take their dimensions from the size and contrast of what we see.2)

As far as I can tell, pretty much the same thing happens when analog is converted to digital, except that it’s a sensor, not the brain, that translates the waves of contrast and scale into pixels. And for what it’s worth, Alvy Smith is at pains to say that pixels aren’t squares. The information contained in the crests and valleys of the waves is simply translated into squares by reconstruction filters, or sensor grids, that collapse those peaks and valleys.3
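(For readers who, like me, wanted to see this concretely: below is a minimal one-dimensional sketch in Python of sampling and reconstruction as I understand it. The wave, the sixteen-sample “sensor grid,” and the linear interpolation used to rebuild the signal are all illustrative choices of mine, not anything taken from Smith’s book.)

```python
import numpy as np

# A toy, one-dimensional stand-in for the process described above:
# a continuous wave of light and dark is point-sampled by a sensor grid,
# and a reconstruction step (here, simple linear interpolation) rebuilds
# a smooth signal from those samples.

def intensity(x):
    """A made-up 'spatial wave' of light and dark across the visual field."""
    return 0.5 + 0.4 * np.sin(2 * np.pi * 3 * x) * np.exp(-x)

# The sensor grid: 16 point samples across the field. Each sample is a
# pixel value -- a number at a point, not a little square.
sample_x = np.linspace(0.0, 1.0, 16)
pixels = intensity(sample_x)

# Reconstruction: estimate the wave between the samples. The infinity of
# points between the crests is gone; we can only interpolate.
fine_x = np.linspace(0.0, 1.0, 1000)
reconstructed = np.interp(fine_x, sample_x, pixels)

error = np.max(np.abs(reconstructed - intensity(fine_x)))
print(f"16 pixels, worst-case reconstruction error: {error:.3f}")
```

The point of the toy example is only that a pixel is a sample of a wave, and that whatever lay between the samples has to be guessed back by a filter.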

Smith also tells us that both analog and digital images contain a lot of information that’s unavailable to our eyes because there is an infinity of points between the crests of the waves. Oddly, or maybe reassuringly, analog infinity is larger than digital infinity. 

So digital images are a compression of what we see in the real world, which means that this Echo Chambers project is about the compression and decompression of my stitching. Compressed data is decompressed by stitching, and that introduces light back into the picture, as it were: light bouncing off threads, light changing with the time of day and with the effects of local color.

But more than that, for me, it is an exercise in bringing the digital image into another state of being that is only partly predictable. It’s like knitting into the void of the space ahead of your needles: you both know and don’t know what will appear. No matter how mundane what emerges may be, the fact that something emerges at all is always surprising.

In any event, I’m wondering if the AI’s neural network input (my darning) and its output (the digital still) are yielding pregnant images. Perhaps the stitching is tapping into the infinity of points on both spatial waves and light waves to reveal otherwise invisible patterns—patterns that are stillborn when they’re printed, stillborn until I stitch their true subject, which is not a darn but light.

What I do know is that stitching along and around lines of color (derived from pulses of colored light) printed on mesh, and then seeing them come into three-dimensional stitches, is a bit uncanny. The result has the quality of a doppelgänger—albeit off-kilter and inexact. In these “Echo Chamber” pieces, there is a movement that is both seen and sensed, animated by reverberations from the liminal space between subject and object. A phenomenologist might describe it as suspending conceptualization to reveal experience, which is probably why I still can’t pin down the meaning of the project, except to say that it seems to operate beyond the binary. Logic says it is not, but I cannot explain away the surplus of meaning I sense and see.

Notes
1. Alvy Ray Smith, A Biography of the Pixel (Cambridge, MA: MIT Press, 2021), 32.

2. Tais Renata Ribeira Parede, Andre A. M. Torricelli, Adriana Mukai, and Marcelo V. Netto, “Quality of vision in refractive and cataract surgery, indirect measurers: review,” Arquivos Brasileiros de Oftalmologia, December 2013, https://www.researchgate.net/publication/260131039.

3. Smith, A Biography of the Pixel, 51.
