Within the space of an instant

by HC Gilje

2005

From the anthology "Get Real", published by Informationsforlaget and George Braziller.

Video is a medium of time. Realtime video allows the artist direct access to the video stream, which can be transformed and molded on the fly using customisable software and hardware.

Realtime processing of video enables the artist to work more intuitively and immediately than before, and accessible programming environments give the artist full control over structure and content. More specifically, it allows him to create a situation for events to unfold within; the live experience is created in the meeting between structure and chaos.

My work focuses on the perception and conception of reality, through exploration of the physical experience of the moment, which involves both a spatial and a temporal aspect. I seek to create suspended moments of time in a space, by creating fictional spaces within the video frame, and by transforming or creating a physical environment using video projections.

My departure point was the three-headed project VideoNervous, initiated in 1999, which set out to explore video as a live medium through three collaborations within more established live art genres: music, dance and theatre. This later led to my improvisation work in 242.pilots and BLIND, my involvement with the dance company Kreutzerkompani, and installations like shadowgrounds, split, storm and sleepers.

VideoNervous

The project was a collaboration with performers within live art related fields, and it produced three very different works: one jam session with a musician exploring the musicality and immediacy of live video, one dance performance creating dynamic spaces by projecting video on the dancers, and a hybrid installation/performance setup exploring video as both a narrative and spatial element.

The main focus of the project was to make use of the immediacy which digital technology offers and to develop video as an instrument, but also to create fluid spaces through projections, and to explore the ability of video to function as both a set design element and a narrative element.

The project was based on recent technological changes, which have given video many of the same possibilities as digital audio, including sampling, software and hardware control of the video stream, and cheaper, brighter projectors.

The analog video tape has both a physical and a time-based limit: because it has a beginning and an end, it takes time to access a specific video clip on the tape. Digital video transcends this limitation: video is a stream of information, and can be accessed and transformed instantaneously on a computer. The directional-time aspect inherent in the tape medium dissolves into Random Access Video, where each frame of the video can be fetched with a keystroke. From being a mechanical tape-based playback system, video has evolved to become a reactive organic medium, an extension of the performer's actions.
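
To make the contrast concrete, here is a minimal sketch (in Python, purely illustrative, not from any of the tools discussed later) of the difference between winding a tape to a frame and fetching it directly by index:

```python
# Digitised video as an indexable sequence of frames (stand-in data).
frames = [f"frame_{i}" for i in range(100_000)]

def tape_seek(n):
    # Tape: the head must wind past every preceding frame to reach n.
    for i in range(n + 1):
        current = frames[i]
    return current

def random_access(n):
    # Random Access Video: any frame is fetched directly, in one step.
    return frames[n]

assert tape_seek(42_000) == random_access(42_000)
```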

VideoNervous focused on the performer and the computer being part of the same system. The instrument, a combination of software and hardware in relation to the visual material, was intended as an extension of the human's actions, like Heidegger's idea of das Zeug (equipment). The human senses were the system's perception of the environment.

VideoNervous was conceptualised as an extended central nervous system in which many of the processes parallel our daily processing of reality: sampling from reality, transforming the samples. The hard disk could then be seen as a memory bank, the video samples as pieces of reality: memories, dreams and thoughts triggered by impressions from the outside world.

242.pilots

242.pilots was born out of a necessity to define and refine a new genre, live video improvisation: to focus on live video as something more than visual eye candy for laptop concerts and club nights. The name refers partly to the software initially used to create our own video applications, and also to the way we work: we create programmed structures which we navigate through during a performance, and each pilot has his distinct way of processing the images before they blend together.

We established 242.pilots as a visual band with Kurt Ralske, Lukasz Lysakowski and myself as members, and invited different musicians to collaborate with us, instead of the other way around. We specifically chose to perform in art-related venues like galleries, cinemas and theatres, to tie ourselves to the early avant-garde cinema of the 20s and the experimental cinema of the 60s rather than to the VJ club scene.

Each performance is completely improvised and therefore unique, and is of course also influenced by our different audio collaborators. As our sound collaborator on the 242.pilots DVD, Justin Bennett, explains: "it's like improvising a soundtrack to a film, but the film also responds to what I do. [the pilots] are also listening, obviously. So it's very much a two-way process."

242.pilots have been compared to free-jazz groups, operating on the outer fringes of experimental cinema. Using our individual video instruments, the three of us respond to and interact with each other's images in a subtle and intuitive way. The images are layered, contrasted, merged and transformed in real time, combining with the improvised soundtrack into an audiovisual experience.

Our first performance was in Rotterdam in 2001, and we have since played at numerous festivals and venues, including transmediale, Ultima and MUTEK, the Montreal Museum of Contemporary Art, the American Museum of the Moving Image and the Guggenheim Bilbao. In 2002 we released a DVD on the New York label Carpark Records, based on a performance recorded in Brussels. 242.pilots received the Image award at transmediale 2003, for mastering "the multifarious aspects of image production under the conditions of digital, interactive and network-based media".

Instrument-building

Software has become a very important tool for artistic creation with digital images. In the video field, advances have been made since the mid-nineties in terms of the real-time manipulation of images and their adaptation to sound and other digitally based data flows.

My first experience in a live context with realtime video software was with x<>pose, one of the first video-triggering programs: by connecting a MIDI keyboard to my Mac, I triggered different QuickTime loops with different keys, and could also apply simple effects and vary the playback rates of the movies. This is still the basic structure of a typical VJ program today: a bank of clips and effects which can be triggered manually or synched to the music.
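
That basic structure can be sketched in a few lines; this is a hypothetical illustration with invented clip names and mappings, not x<>pose's actual internals:

```python
class ClipBank:
    def __init__(self, clip_paths):
        # One loop per MIDI note, starting at note 60 (middle C).
        self.clips = {60 + i: path for i, path in enumerate(clip_paths)}
        self.current = None
        self.rate = 1.0

    def note_on(self, note):
        # A key press swaps the active loop, if a clip is mapped to it.
        if note in self.clips:
            self.current = self.clips[note]

    def control_change(self, value):
        # Map a 0-127 MIDI controller value to a playback rate of 0.0-2.0.
        self.rate = value / 127 * 2.0

bank = ClipBank(["rain.mov", "traffic.mov", "faces.mov"])
bank.note_on(61)          # trigger the second loop
bank.control_change(96)   # play it back at roughly 1.5x speed
```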

In 1998 I was introduced to Image/ine, created by Tom Demeyer in dialogue with Steina Vasulka at STEIM in Amsterdam, which was a much more powerful tool for manipulating and layering images, combining live video streams and prerecorded loops. Its most interesting feature was probably the possibility of sampling video into a buffer, with very direct user control of the buffer contents. Image/ine was designed for performance purposes, and used a clever combination of key presses, mouse movement, MIDI controllers, audio and LFOs (low-frequency oscillators) to create a very responsive visual instrument in a live context. Image/ine and x<>pose were my main tools in the VideoNervous project.

Image/ine was great in many ways for live purposes, but lacked the ability to make the more advanced programmable control structures necessary for installation setups. I started experimenting with the graphical programming environment Max on one computer controlling the behaviour of Image/ine on another, which I used in the installation node in 1999. node is an interactive computer/video installation in the form of a metal well that provides a meeting place in space and time by collecting faces and reflecting them back at the viewer. Sensors connected to the computer running Max detected when people were above the well, and triggered different states in Image/ine running on the other computer: record a new image, play back a video sequence, play the buffer of previous faces, or show the live stream from the camera.
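
The control logic can be sketched roughly as a sensor-driven state machine; the state names and the choice between them are my assumptions, not the actual Max/Image/ine patch:

```python
import random

class NodeController:
    def __init__(self):
        self.face_buffer = []     # faces collected from earlier visitors
        self.state = "live"       # default: show the camera's live stream

    def on_sensor(self, presence):
        if presence:
            # Someone is above the well: record a new image, then choose
            # what to reflect back at them.
            self.face_buffer.append("captured_face")
            self.state = random.choice(["playback", "buffer", "live"])
        else:
            self.state = "live"
```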

Max is a program for making programs: you build your own programs in a very visual way by working with boxes representing objects with different functions. These boxes have inlets and outlets which allow them to receive and send information to and from other objects, connected together with patchcords, reminiscent of analogue synths.

A program made in Max, often referred to as a patch, can be compared to a circuit board, where you control the flow of information using the various objects connected together, so you get a very visual representation of what the patch actually does. The program structure and the GUI (graphical user interface) as presented to the user are identical.
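
As a rough analogy (a toy model, not how Max is implemented), the patching idea can be expressed as objects whose outlets pass messages down patchcords to other objects' inlets:

```python
class Box:
    def __init__(self, fn):
        self.fn = fn            # what this object does to incoming data
        self.outlets = []       # patchcords to downstream boxes

    def connect(self, other):
        self.outlets.append(other)

    def bang(self, value):
        out = self.fn(value)
        for box in self.outlets:   # send the result down every patchcord
            box.bang(out)

double = Box(lambda x: x * 2)
printer = Box(lambda x: print(x) or x)
double.connect(printer)
double.bang(21)   # prints 42
```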

Max was originally a visual programming language developed at IRCAM in Paris, based on the MIDI protocol, enabling the computer to talk to synthesizers and external controllers like faders, knobs and sensors. Later it was expanded to include audio processing, and in 1999 the first version of the realtime video objects nato.0+55 was released by netochka nezvanova, opening up tremendous possibilities for working with realtime video. Over the years nato's capabilities expanded and several other sets of video objects surfaced, mainly Jitter, SoftVNS and auvi. So now there is a wide array of flexible tools for recording, playing, combining, creating, analysing and manipulating video, graphics and sound in real time. The best thing about this environment is the number of objects made by the users of Max, vastly expanding the capabilities of the original application.

Max and the range of video objects available make it possible for artists to design their own video programs without knowing a programming language like C.

Some of the decisions being made during a live performance are:

- selecting an appropriate video source file
- combining multiple images (in additive or subtractive ways)
- applying effects to images (e.g. blurring, warping, distortion), dynamically varying the parameters of effects (e.g. depth or speed), or applying effects only to parts of images
- dynamically modifying the playback speed of video clips
- composing multiple images within the frame
- time-based effects (buffering, feedback etc.)

These and many other processes offer a huge range of options for expressive, realtime control of the image. Some of these processes are random; some parameters might be controlled by LFOs (sine waves, square waves etc.); other parameters are mapped to onscreen controllers or external faders and knobs. This creates a multitude of choices and seemingly endless combinations, so the structure of the setup is a key factor in how the actual visual output turns out.
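
As an illustration of the LFO-driven control mentioned above, here is a minimal sketch; the parameter names and frequencies are invented:

```python
import math

def sine_lfo(t, freq, lo=0.0, hi=1.0):
    # Sine wave mapped from [-1, 1] into [lo, hi].
    s = math.sin(2 * math.pi * freq * t)
    return lo + (s + 1) / 2 * (hi - lo)

def square_lfo(t, freq, lo=0.0, hi=1.0):
    # Square wave: snaps between lo and hi.
    return hi if math.sin(2 * math.pi * freq * t) >= 0 else lo

t = 3.2  # seconds into the performance
params = {
    "blur_depth": sine_lfo(t, 0.25, 0.0, 10.0),  # slow continuous drift
    "mix":        square_lfo(t, 2.0),            # hard rhythmic cuts
}
```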

The extended now

I have found many similarities between my approach to video and the way musicians work, especially within the noise and impro scenes: the idea of video as a flow, an energy, something that can be molded and transformed like a substance; the focus on texture and the moment instead of on creating a musical structure. A live setting implies trying to have some control over a chaotic energy, or to create a dialogue between performer, structure and chaos.

In music, the whole idea of improvising deals with the immediate presence: the notion of nonlinear time instead of sequential time, where presence is more important than position, and the notion of vertical time as the extended now.

The extended now is where the physical experience takes place. It is the presence of your body in the present, mediating between inner mental space and outer physical environment; it is the ultimate mode of sensibility, and it has for me also become a method of working. Vertical time is closely related to the process of dreaming, of being there as an experience unfolds, where the images and sounds are a chain of associations, often not intended by the creator, but created in the live meeting between performer/environment and audience/visitor.

There is not a big difference for me between improvising in a performance and filming with my camera: intuition, immediacy and responsiveness are in focus; grasping the moment is the most important thing. The video camera is an instrument of perception, the video patch an instrument of conception. What is produced is a chain of associations, in the meeting between my mind and the world.

Structure

Improvisation needs to be structured, and there are several strategies.

One strategy is to think of structure as a way of limiting the options. In principle, my instrument can do anything, so I need to set limits to avoid total anarchy and chaos. The interesting results often happen in the tension between structure and chaos. The limits are determined by the situation: who I play with, what kind of space, what type of project, the technical setup etc.

I build a new instrument for each new project. These range from one-time collaborations with audio artists and other impro events to ongoing associations with a regular group of people for specific projects: mainly jazzkammer, Kelly Davis, the composers Yannis Kyriakides and Maja Ratkje, and the choreographer Eva-Cecilie Richardsen.

These collaborations have moved in quite different directions. My collaboration with the noise duo jazzkammer started in 2000 with a series of concerts, followed by a tour of Japan in 2002, which later resulted in the commissioned audiovisual composition night for day in 2004. We have also collaborated on a Kreutzerkompani performance in 2004, Twinn, and on the Voice performance with Maja Ratkje at Ars Electronica in 2003. My collaboration with Kelly Davis in BLIND is in many ways a continuation of my work with 242.pilots, but by being a fixed audiovisual unit we can develop ideas over time, allowing for subtler work. We released a DVD of live recordings on the German label audioframes in 2004.

Yannis Kyriakides is a composer and improviser with whom I have worked on three projects. Our most extensive collaboration so far, Labfly dreams, was made for a 40-piece orchestra and premiered at Queen Elizabeth Hall in London in November 2003. Four composers and four video artists were invited to create a composition each, and the four works were performed live by the orchestra and the video artists. Kyriakides and I based our piece on speculations that fruit flies might dream during their short lifespan, and we wondered what they would dream about. This was a new challenge, as the music was composed and rehearsed, while I wanted to keep the video as open and flexible as possible. I created different states for the different parts of the music, making it possible to have a quite tight structure while still allowing for improvisation within each part.
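
The section-state idea can be sketched like this; the section names and their contents are invented for illustration:

```python
import random

# Each part of the composed score exposes only a subset of sources and
# effects; within that subset the performer improvises freely.
SECTIONS = {
    "intro":   {"sources": ["fly_closeup"],          "effects": ["blur"]},
    "dream":   {"sources": ["wings", "light"],       "effects": ["feedback", "warp"]},
    "descent": {"sources": ["swarm", "fly_closeup"], "effects": ["invert"]},
}

def improvise(section):
    state = SECTIONS[section]
    # A free choice, but only among what the current section allows.
    return random.choice(state["sources"]), random.choice(state["effects"])

print(improvise("dream"))   # e.g. ('light', 'warp')
```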

Another way of structuring improvised material is to give it a form after the performance, by recording the performance and importing the footage into an editing application. For instance, in my BLIND collaborations I almost always record the performances and use the material as source material for other work. Shiva is constructed entirely from fragments of different live performances. The audiovisual composition night for day, made in collaboration with jazzkammer, was a constant negotiation between the raw live energy derived from improvisations and the subtler composed/edited parts, resulting in deconstructed footage from Tokyo transformed into 13 dreamlike scenes.

I sometimes use realtime video processing for the sole purpose of creating source material: not in a jamming situation, but as a way of exploiting the responsiveness of the instrument to manifest the energy of the moment, for instance in the video sunblind.

In working with installations I attempt to make a system which breathes by itself. This means setting up certain rules by which the installation system makes decisions, sometimes in dialog with the visitor, like in the work for the Get Real exhibition. This adds another level of structure to the work.

I think of many of my video installations (like sakrofag, storm and shadowgrounds) as dreamweavers, using a stream-of-consciousness analogy as a structuring element, where the main decisions are which video streams to play, and when. Of course there is a limited number of video sources available, so content will repeat, but never in the same sequence, creating an ever-changing stream of images. Since there is no beginning or end to these works, they also create a sense of timelessness or suspended time. Locations are pulled out of linear time and into nonlinear time, creating mythical spaces.
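
A minimal sketch of such a dreamweaver structure (clip names invented): a finite pool of clips is replayed endlessly, but never twice in the same order:

```python
import random

def dreamweaver(clips):
    while True:
        order = clips[:]
        random.shuffle(order)     # same content, a new sequence each pass
        for clip in order:
            yield clip            # "play" the next video stream

stream = dreamweaver(["harbour", "fog", "sleeper", "window"])
for _ in range(6):                # endless in use; sampled here for demo
    print(next(stream))
```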

Space

I look for the point where impression and expression blur. According to newer cognitive science, it is quite probable that the same parts of the brain dealing with perception and motion are also used for conception, so perceiving and conceiving a reality are strongly connected, mediated through the body. The mind is embodied, meaning that all our mental faculties and concepts are grounded in physical experience. The way the human body is made up determines how we categorise, and thus filter, the information from the world. This in turn strongly influences how we think and relate to the world. Our relation to the world is mediated through our brain structures, our bodies and our interaction with the environment.

If our mind is embodied, then how we relate to a physical environment has emotional consequences. Moving in space and organising in space is a way of thinking, dreaming and remembering. Modern dance is relevant to meaning production on these premises: dancers have a bodily knowledge, and choreography is about organising movements in relation to space.

The connection between mental state and physical space is old. The ancient Greeks used temples as a way of remembering speeches: by linking different parts of a speech to physical places in the temple, the speech could be reconstructed by mentally moving through the temple space. Landscape paintings often reflect mental states, and American film often reflects mental states and reactions through physical elements.

Working with computers often implies relating to virtual worlds; the interaction between human and computer takes place on the computer's premises. I want to move away from the abstract, slick virtual space and move the meeting place of human and computer into physical space: to make an immersive reality that a person can physically walk around in, relating to the projections of the computer instead of being dragged into a virtual reality. The architecture of the space structures your experience. You don't have to relate to mouse/keyboard/monitor/goggles/helmet; I create dynamic audiovisual spaces which the audience relates to physically, and thus emotionally. This also allows several people to experience the installation together at the same time, as opposed to most interactive installations, where only one person at a time gets a chance to relate to the work, while the others wait in line.

Time

The body is the measuring rod for how we experience time. We cannot observe time in itself; we can only observe events and compare them. Time is defined through regular repeated motion, like the ticking of a clock, the motion of the sun, or the neural firings in the brain which occur forty times a second.

We all encode our experience of time at different rates. A single moment from several months ago may consume our thoughts, yet a whole summer ten years ago may have completely vanished from our memory. We stretch and condense time until it suits our needs.

Most of our understanding of time is a metaphorical version of our understanding of motion in space. There is an area in our brain dedicated to the detection of motion, but none for time, which could explain why time is often conceptualized through motion in space.

Concepts of time often relate to space, as in the metaphor of times as locations in space: the future is ahead of us, the past behind us, and the present is here.

In musical terms, extreme slow time can be experienced as static space, so space could be seen as a degree of time.

Timebandit is a continually developing series of computer/video algorithms, installations and performances based on live input, dealing with different ways of presenting and transforming space over time, and time into space. It is a way of restructuring reality, breaking down time and recombining spaces into new realities.

The first use of the timebandit structure as an installation in a public space was for the Autumn Exhibition in Norway, fall 2000, as part of the group installation sement, where a live feed from a camera pointed at the door of the space was captured, chopped up, transformed and projected on a screen meeting the visitors. The next development of this project was for the sommerfest 2001 exhibition at Podewil in Berlin, where the concept of capturing and transforming the live input was refined by working on different ways of combining the buffered material with the live feed: one mode emphasizing the motion in the space, another focusing on the static quality of the space, where the people moving in it become mere shadows. The final major installment was at Kunstnernes Hus, Oslo in 2002, again as part of the Norwegian autumn exhibition. This version tried to incorporate the previous realisations into one work.

During work on a live performance with 242.pilots at STEIM, Amsterdam, in the fall of 2001, I developed an algorithm that spatializes time by layering random parts of the incoming video feed on top of the existing image, creating a continually changing collage reflecting different layers of time, a concept I have also developed in my live performances (one document of this is the video Crossings) and the installation Storm.
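
That layering algorithm can be sketched as follows (using NumPy for illustration; the frame size and rectangle bounds are arbitrary assumptions):

```python
import numpy as np

H, W = 240, 320
canvas = np.zeros((H, W, 3), dtype=np.uint8)   # the persistent image
rng = np.random.default_rng()

def layer(frame):
    # Pick a random rectangle of the incoming frame...
    x0, y0 = rng.integers(0, W - 32), rng.integers(0, H - 32)
    w, h = rng.integers(16, W - x0), rng.integers(16, H - y0)
    # ...and paste it over the same region of the canvas, so different
    # regions of the image end up coming from different moments in time.
    canvas[y0:y0 + h, x0:x0 + w] = frame[y0:y0 + h, x0:x0 + w]
    return canvas
```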

Live

One issue with live video improvisation using recorded footage is that the actual liveness gets lost for part of the audience: many think the video is just playback, as there is no obvious link between the person sitting with the laptop and the output on the screen if you have no previous knowledge of the setup. It has never been my intention to direct too much attention onto myself as a performer on stage, whether with 242.pilots or in solo work, because the focus should be on the audiovisual experience.

For me, the liveness exists in the situation itself: if you weren't there, you missed it, and subsequent documentation is merely a substitute. I like the idea of unstable media, meaning that a work only exists within a short time-space and among the people who are present. This gives it a sort of fragility. Also, considering that the average attention span is getting shorter, live art has a clear advantage over the gallery context: here, the audience actually chooses (and sometimes pays) to watch what you are making, so while you are performing you have their attention. This is a privileged situation!

I brought my experience from live improvisation and the timebandit project into my work with the dance company Kreutzerkompani, which I established with choreographer Richardsen in 2000. Eventually I found my work with dancers to be a perfect occasion to use the live image as raw material, with no need for prerecorded footage. This came partly from experience with previous Kreutzer productions, where balancing the use of video in relation to the dancers was always an issue. When using the dancers as live source material, focus turns from the dancers and video as distinct elements to the idea of motion itself. The video abstracts the movements of the dancers, and then it is not so important which element has the focus at any one point; our main interest is motion in space.

The great benefit of the Max environment with its video objects is that it treats video as a stream of information: there is no difference between working with recorded footage and a live feed from a camera, they are both streams of images. This means I can apply the same transformations to the live video as I would in my improvisation work in general.
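
A small sketch of this "everything is a stream" idea (the functions are illustrative, not actual Max objects): a recorded file and a live camera expose the same frame-iterator shape, so the same transformation applies to both:

```python
def transform(frames, effect):
    # The processing chain neither knows nor cares where frames come from.
    for frame in frames:
        yield effect(frame)

def file_frames(path):
    # Recorded footage: a finite stream of frames (stand-in data).
    for i in range(3):
        yield f"{path}[{i}]"

def camera_frames():
    # Live feed: an endless stream with exactly the same shape.
    while True:
        yield "live_frame"

for f in transform(file_frames("tokyo.mov"), str.upper):
    print(f)
```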

Our first experiment with live video was synk, which premiered at the Ultima festival in Oslo in 2002. It involved one dancer, Justin Bennett on sound and myself on video.

The idea of synk was that no prerecorded video or audio would be used; only material sampled during the performance was allowed.

We wanted to investigate the live situation as raw material: to impose a structure on a live situation that allows for unpredictable results within that frame. My setup allows me to sample the dancer's movements and create loops which then recombine with what the dancer is doing on stage. I also use several delay buffers and feedback systems, and blend streams of consecutive images into ghostlike images. The interplay between my program, the variations of the dancer and what is picked up by the camera creates a unique performance every time, influenced mainly by the shape and look of the space we perform in, everything from white gallery spaces and small black boxes to big stages.
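
One of these processes, the ghosting effect, can be sketched like this (the buffer depth and frame format are arbitrary assumptions):

```python
from collections import deque
import numpy as np

class Ghost:
    def __init__(self, depth=12):
        # A delay buffer holding the last `depth` frames.
        self.buffer = deque(maxlen=depth)

    def process(self, frame):
        self.buffer.append(frame.astype(np.float32))
        # Averaging consecutive frames smears motion into a ghostlike trail.
        return (sum(self.buffer) / len(self.buffer)).astype(np.uint8)

ghost = Ghost()
frame = np.zeros((240, 320, 3), dtype=np.uint8)   # stand-in camera frame
out = ghost.process(frame)
```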

Realtime opens up responsive environments and responsive instruments, allowing for tighter bonds between intention and action, between mental space and physical space. Realtime extends video as a medium of time into a medium of the extended now.

References:

Lakoff & Johnson: Philosophy in the Flesh

Doug Aitken book (Phaidon)

shadowgrounds catalogue

242.pilots documentation + DVD

Compusense project description

Petunia workshop (especially Peter Rosenquist and Christian Blom)

Bill Viola: Reasons for Knocking at an Empty House
