F/LAT - Free/Libre Art & Technology

Platform for education and research in free/libre arts & technologies

audio-video live mixing system

F/LAT inherits the experience of the Metahub experiments, mixing live signals from multiple sound and image sources on multiple outputs, displays and locations. graphical schematic

The Open Source Video Lab from Constant is also feeding the F/LAT environment: http://www.constantvzw.org/site/-Open-Source-Video,114-.html

projects in progress:

network and distribution

Digital Graphitti

Open-source fabrication

associated studios:

Libre Hardware


This project intends to imagine, design, build and share a reproducible fabricator machine, made mainly from plywood and built on top of the knowledge and principles of the RepRap project and its children. The idea is to build a generic machine capable of reproducing most of its own parts, using the rawest materials possible.

This model is intended to use 18mm plywood sheets, standard simple iron bars, and as few specialised mechanical parts (bearings and wheels) as possible. The electronics are based on the generic RepRap design; the precise configuration can vary.

All designs are to be placed under the GPL licence, so freedom is ensured.

Other machines like this exist in the world, but to date none of them has its blueprints available and/or published under a free licence (some say they will in the future…).

If you have information to the contrary, contact us so we can add credit and share knowledge.

VR Libre Lab

Virtual Reality Libre Lab

The VR Libre Lab is a laboratory to explore, develop and teach the open-source technologies and methods of the Virtual Reality medium for creative experiences. It is part of the F/LAT platform and open to cross-pollination. We explore all the relevant facets of what is called VR, the immediate sensory illusion of displacement in space and time, mainly through visual perspective, audio spatialisation and acousmatics, and other sensory interfaces, including:

  • VR headsets, complex CAVEs and video mapping
  • audio spatialisation and composition
  • 3D reconstruction of space
  • multiple-body tracking, sensing and interaction
  • media interaction
  • mechanical interaction
  • …



“This wiki tries to summarise notes and critical resources about the processes at stake in the current augmentation of body quantification in contemporary networks. […]”



Through a complete reference of the quantified-self vocabulary, its projects and its participants, the task is to unveil the technological processes at work in the quantified-self movement and to look at the ways by which it systematically organises a new social reference, a new body language; or, otherwise formulated: can the quantified self, as a cultural technique (Kulturtechnik), impact the social sense of self-integrity?

The research will examine the uses and modalities of social integration, which are most of the time linked to an online platform, drawing on different sources:

  • Commercial platforms, such as those of sport-gear makers.
  • Mobile applications that use embedded sensing devices.
  • Social platforms, such as the Quantified Self community.

It will look at the uses of those features and the choices made, but mostly it will examine the vocabulary and social patterns that drive those exchanges and the way they format a relation to the body.


Apertus Axiom Open Source Camera projects


The Apertus Axiom Camera is the first and leading fully open-source professional camera system, providing amateurs, tinkerers and professionals with a versatile tool they can use and abuse to deploy their arts and ideas.

This project is about learning to use the Apertus Axiom Open Source Camera system with multiple setups and goals. It will involve professionals and amateurs of cinema, technicians, directors, artists, coders, hackers, and anyone interested in the versatile and open tools this camera system offers.

A first workshop is organised on the 24th of January 2015 at the F/LAT space. Please register with a note of motivation at init@f-lat.org, as places are very limited.


dev/research_proj_list.txt · Last modified: 2016/06/14 10:35 by olm-e