It’s been a while since I last posted here (but at least that means I’ve been kind of busy)!
A few years have passed since we started dreaming of a decentralized, open data network for all our (web) projects. Happily, researchers have been working on the semantic web for quite a few years, and with the latest W3C drafts, I think we’re almost there.
I see a lot of uses for this server: backing the data of the commons in a p2p way (think Unisson, TILIOS, p2pfoundation, Virtual Assembly), being the backbone of any cultural or research institution, allowing rapid data-backend deployment, …
During the weekend of June 22nd and 23rd, we organized the first Open Bidouille Camp in Lille, at the Saint Sauveur station. It was the occasion to bring out all the hacks accumulated by the Lille makers, but also to plan new ones for the event, all while making DIY accessible to everyone.
And… it was a real success! 3,000 people according to the organizers, 200 according to the police, of course, but the numbers don’t matter. What matters is the audience, which showed up in all its diversity: a true melting pot in terms of age, gender, background, and so on. And that, for me, is our victory.
Why? Maybe the economic context, or a growing ecological awareness? It’s hard to say, but clearly there is an appetite for making (and re-making) things yourself, for putting something of yourself into objects, for reclaiming them, repairing them, or repurposing them. Learning, building, using consciously and repairing rather than consuming and throwing away; that is what brings people together.
Because, in fact, there is nothing revolutionary in the technologies we presented. You could even call them low-tech compared to cutting-edge industry. But that is only looking at the technological side.
What is revolutionary is the way this technology is designed and approached. 3D printers have existed for decades, and yet… everyone’s eyes widen the first time they meet one of these strange machines. That is hardly surprising: who has ever had the privilege of getting close to such a €50,000 monster, locked in a room deep inside a factory that is itself reserved for a handful of people? Very few. And with DIY, that is exactly what changes.
The current revolution comes not from lifting a technological lock, but a human and social one, and that is thanks to DIY, free software (open source) and networks:
I no longer buy my machine ready to run: I build it and learn how it works;
I no longer throw it away when it malfunctions: having learned how it works, I can repair it;
Parts no longer come from the other side of the world: I am shown how to produce them locally with another machine that already exists;
I no longer depend on a manufacturer: the plans are available on the Internet, and I am autonomous.
Having fallen in love with gnome-shell-pomodoro, I’ve stopped using my good old kitchen timer for pomodoro sessions. The problem is that I used to put the timer on top of my screen, so that my coworkers could see whether I was available. With software such as gnome-shell-pomodoro, that is of course no longer possible. And so came the disturbing questions every 3 minutes…
So first, I tried to use both the software and the kitchen timer, starting them (almost) together. It was tedious and, as you may guess, I stopped doing that after two days. I had to find something better, something lazier and a bit more clever. I therefore decided to hook an Arduino with a flashy blue LCD screen up to my computer and started hacking a little sketch for it. Then I modified gnome-shell-pomodoro to send commands to the serial port whenever its state changes. The result is a simple yet effective way to show my coworkers whether I’m available… and how soon they will be able to ask a question.
Once it was working, and since I don’t like maintaining forks, I asked upstream whether it would be a good idea to add support for this directly into the software, through D-Bus. The answer was quite positive, so I’ve started working on that, wrote a Python daemon and published the Arduino sketch in this repository. It’s still very rough, but usable. I’ll enhance the D-Bus support and the usability on both sides over the next weeks, to match my daily usage and make it cleaner.
If anyone’s willing to add support for another application or to enhance the Arduino sketch (it really needs a better timer), please go ahead! :-)
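For the curious, the serial protocol for this kind of setup can stay dead simple: one ASCII line per state change. Here is a minimal sketch of the idea; the state names and message format are illustrative, not necessarily what the daemon and the Arduino sketch actually use:

```python
# Illustrative line-based protocol for a pomodoro status display:
# the daemon writes one line per state change, e.g. "POMODORO 1500\n",
# and the Arduino sketch parses it to update the LCD.

VALID_STATES = {"POMODORO", "BREAK", "IDLE"}

def format_command(state, seconds_left=0):
    """Build one protocol line: the state plus the remaining seconds."""
    if state not in VALID_STATES:
        raise ValueError("unknown state: %r" % state)
    return "%s %d\n" % (state, seconds_left)

def parse_command(line):
    """Inverse of format_command, as the microcontroller side would do it."""
    state, seconds = line.split()
    return state, int(seconds)
```

On the computer side, each line would then simply be written to the serial port (for instance with pyserial); keeping the protocol human-readable makes it trivial to debug with a serial console.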
Now that I have an idea for cool hardware for listening to music, I’ve been experimenting with PulseAudio’s network support. Over wireless.
There are two ways of using Pulse for network streaming: RTP and TCP. The first was a total failure for me (using Pulse 1.0). The second, however, was very straightforward to set up and works almost perfectly. I only get micro-interruptions every 10 seconds or so. But that is enough to make it unsuitable.
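For the record, the TCP setup boils down to loading the native-protocol module on the receiving machine and pointing the sender at it. The host name and IP range below are placeholders for your own network:

```shell
# On the receiving machine (the one connected to the speakers),
# accept native-protocol connections from the local network:
pactl load-module module-native-protocol-tcp auth-ip-acl=192.168.1.0/24

# On the sending machine, direct a client at the remote daemon:
PULSE_SERVER=receiver.local paplay song.wav
```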
First, the upload rate is about 200 kB/s, so maybe that’s too much for a cheap wireless card (I’m using the ath9k driver). According to Lennart, the problem is partly due to Pulse not compressing streams over the network. One possible solution is to use the CELT codec (from Xiph), which has been designed to produce high-fidelity audio with very low latency and power consumption. CELT’s features are the following:
Ultra-low latency (typically from 5 to 22.5 ms)
Full audio bandwidth (≥20kHz; sample rates from 8 kHz to 48 kHz)
A quality/bitrate trade-off competitive with widely used high delay codecs
Packet loss concealment
Constant bit-rates from 32 kbps to 128 kbps and above
A fixed-point version of the encoder and decoder (interesting for embedded boards)
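Before blaming the codec situation, the raw figure is easy to sanity-check: uncompressed CD-quality PCM alone accounts for roughly 172 kB/s, so around 200 kB/s on the wire once packet overhead is added is about what you would expect:

```python
# Raw bandwidth of an uncompressed stereo CD-quality PCM stream.
sample_rate = 44100   # Hz
channels = 2          # stereo
bytes_per_sample = 2  # 16-bit samples

bytes_per_second = sample_rate * channels * bytes_per_sample
print(bytes_per_second)           # 176400 bytes per second
print(bytes_per_second / 1024.0)  # ~172 kB/s, before network overhead
```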
Given random network latencies, I can assume around 2 ms on a Wi-Fi connection; add Pulse and CELT and that would be something around 10 ms… which would still be pretty OK. This would make a close alternative to Apple’s RAOP protocol, which uses ALAC. Still, ALAC is lossless while CELT is limited to 128 kbps. So, why not FLAC, or ALAC itself? Even if it eats more processing power, it would still make sense for desktops and powerful boards.
To avoid multiple encode/decode cycles, another trick would be to implement server-side decoding, that is, to write modules for PulseAudio supporting Speex, Vorbis, etc. But that’s another story.
Second, this could also be a jitter/latency problem. In fact, I’m increasingly convinced this is the real problem. According to Maarten Bosmans, there’s still room for improvement in the Pulse core, so that’s good news. I don’t understand why my Wi-Fi N card, when I’m not doing anything else, wouldn’t be able to keep up with a 200 kB/s rate. But that’s only intuition.
Conclusion: before going further, I have to figure out whether this is a latency problem or a bandwidth problem, so I can pick the right fight.
Over the past few weeks, I co-created a project called Damassama, a digital art installation initiated by Léonore Mercier. The work was carried out in cooperation with the MINT research team at Ircica. Now that the installation is finished, you can go and try it until July 24th at Le Fresnoy (Tourcoing), during the annual Panorama exhibition (#13 this year). Here is a short presentation of the installation:
Damassama is an installation made of two half-circle tiers on which Tibetan singing bowls rest. Hammers strike the bowls to make them ring, and the hammers are triggered by your body’s gestures. You stand at the center of this amphitheater, and to trigger a bowl, you simply extend your arm toward it. To play a rising scale, you can sweep your hands in front of you, or throw your arms out for a chord. Since everything is tuned to an oriental scale, you quickly get caught up in creating slightly mystical atmospheres!
On the technical side, the only sensor used is a Kinect. With a good dose of OpenNI, Mididings, Jack, Python, C++ and more, we met the challenge of building this complex installation, which was not a foregone conclusion :-)
In short, I’ll let you browse the photos; they are worth more than a long speech…
I’ve been a long-time fan and user of the Python programming language… so I had to show it in the office. I really love the Trainspotting-like poster, but I also wanted something more zen, something that shows the genius inside this language. So I just import’d “this” and started copying and pasting some of the aphorisms into Inkscape. Here’s the result. It was never finished, but you know, you’re welcome to send patches :-)
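Incidentally, the aphorisms don’t have to be copied by hand: the `this` module ships the Zen of Python ROT13-encoded in `this.s`, with the decoding table in `this.d`, so you can extract them programmatically:

```python
import this  # printing the Zen is a side effect of the import itself

# Undo the ROT13 encoding to get the plain text back.
zen = "".join(this.d.get(c, c) for c in this.s)
aphorisms = zen.splitlines()[2:]  # skip the title line and the blank line
```

Handy if you ever want to typeset them all rather than cherry-pick a few.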
So, here’s the repository; feel free to fork it (CC-BY-SA 3.0)!
For various reasons, I have to come back to web development. I hate writing websites, because I find it so boring, but what I want to build is closer to a real application, just web-based. At least the business code and the goal are going to be much more fun. Anyway, in a pythonic spirit, I fired up Django and started writing this web app.
The first step led me to collect a few reusable apps. One of the problems is that most of them aren’t packaged, or if they are, the packages are too old compared to the latest releases. Some cool guys had a good idea: include a way to fetch dependencies automatically using the “manage.py” command. This piece of code is called django-dependency. All you have to do is something like this:
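I no longer have the exact snippet at hand, so take the following as an approximation of the idea rather than the real django-dependency API: you declare your dependencies in your settings, and a manage.py subcommand checks them out. The module name, classes and URLs here are illustrative placeholders.

```python
# settings.py -- sketch of the django-dependency idea (names approximate,
# URLs are placeholders, not real repositories).
import deps

DEPENDENCIES = (
    deps.SVN('http://example.com/svn/django-someapp/trunk'),
    deps.HG('http://example.com/hg/django-otherapp/'),
)
```

Running the dedicated manage.py subcommand then fetches everything into your project tree.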
While it looks sexy, this software has some problems:
It doesn’t support Git;
It relies on shell commands (not the Python libraries) for the version control systems;
It doesn’t handle checking out or updating a specific revision.
I therefore decided to patch it. But while I was adding things, I felt something was wrong, and I ended up rewriting it completely.
This is why I now have a new system that is very close to the idea of django-dependency (thank you, guys, for this awesome idea!). I’ve called it django-autodeps and it is licensed under the GPL v3. I plan to release it as soon as possible (it works well for me, but I prefer to use it for a few weeks, fix bugs, etc. before releasing a stable version).
I have recently ordered a few X10 devices to play with. For those who don’t know, X10 is a home-automation (domotics) standard that lets you remotely control devices over an ordinary power line. A typical scenario would be remotely changing the intensity of a light. This has many “serious” use cases, but honestly, for me, these are more fun devices to play with. Even if I may use them at home later, I’m more interested in seeing how they can be used in other contexts, such as artistic installations.
One of the devices I’ve ordered is a USB controller (model CM15) that you plug into both a USB port and the power line. It can also receive and send events from wireless controllers, but that’s out of scope for now.
So first, of course, I started looking at the existing free software for controlling all these toys and discovered a few projects such as Eclipse, MisterHouse, etc. They all seem interesting, but my initial plan was rather to have a good library that I could integrate into my existing projects. I then found a low-level Perl library, another one for the CM17 in Ruby, and my best result was one in C with a demo application.
The latter works like a charm and allowed me to enter the cool world of X10 by sending a few commands to my actuators. Fun. But even if C is good for many things, such as operating systems, nowadays we tend to use higher-level programming languages, such as my preferred one: Python.
After searching for CM15/X10 stuff for Python, I only found three libraries: a module for the FireCracker (CM17), which exposes a low-level, very C-style interface, and two others, PyXAL and PyX10, that basically wrap the X10 Abstraction Layer library. While they may seem interesting, all three projects are no longer maintained and their latest commits date from 2000… ergh!
One possible solution could have been to wrap the C code in a Python module, but even though it works well, I had the feeling I could rewrite it in a more concise, object-oriented way. Moreover, this kind of USB device is usually not overly complex to drive, which led me to prefer writing a pure Python module. The beginning of another story :D
My goal is to make a generic, yet simple, X10 library. Using the documents and specifications I’ve found, I’ve already written what’s needed to detect and set up the CM15, and made a pretty object-oriented interface to talk to the devices. I’ve also added the concept of a House, and I plan to have virtual groups so you can send a command to a chosen set of devices. Have a look:
```python
lamp = dev.actuator("A2")
house = dev.house("B")
```
By the way, the module can be used with any other X10 device; it’s made for that. Almost all the code is generic: you just have to write your own controller class. Also, the API is likely to change, so if you have suggestions…
I’ll publish the module soon, and my plan is to enhance it little by little… in fact, I’ll add a feature every time I need one. If you want it faster, I’ll be happy to apply your patches :-)
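To give an idea of the direction, here is a stripped-down, hypothetical sketch of such an object model; the real module obviously has to serialize these commands into the CM15’s USB protocol, which this stand-in transport just records:

```python
# Hypothetical sketch of an object-oriented X10 interface.
# A controller turns (house, unit, action) triples into wire commands;
# here the transport merely records them, to show the API shape.

class Controller:
    def __init__(self):
        self.sent = []  # stand-in for the real CM15 USB transport

    def send(self, house, unit, action):
        self.sent.append((house, unit, action))

    def actuator(self, address):
        # "A2" -> house code "A", unit 2
        return Actuator(self, address[0], int(address[1:]))

    def house(self, code):
        return House(self, code)

class Actuator:
    def __init__(self, controller, house, unit):
        self.controller, self.house, self.unit = controller, house, unit

    def on(self):
        self.controller.send(self.house, self.unit, "ON")

    def off(self):
        self.controller.send(self.house, self.unit, "OFF")

class House:
    """Address every unit of a house code at once (X10 has house-wide commands)."""
    def __init__(self, controller, code):
        self.controller, self.code = controller, code

    def all_units_off(self):
        self.controller.send(self.code, None, "ALL_UNITS_OFF")
```

Usage would then read naturally: `dev = Controller(); dev.actuator("A2").on(); dev.house("B").all_units_off()`.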
A few weeks ago, I started trying the free tablature editor TuxGuitar. It’s a clone of Guitar Pro, written in Java and available for Linux, Mac OS X and Windows. Basically it works, but hell, it is really no fun to use. You start playing a riff on your guitar, then you want to transcribe it into a tab, and it takes ages before you get the timing right.
A friend recently showed me a piece of software called Melodyne, and I have to say it’s quite impressive. The idea is to read an audio file, analyze it, and work out the notes that were played. The result is a score, and you can change the pitch and length of any note, transpose everything to a new scale, etc. My point with this example is that we do have the technology to extract notes from an analog music stream.
Indeed we have it, and Roland did not wait for Melodyne to make some interesting devices, such as the GK-3. It’s a pickup you attach to your guitar that produces the corresponding MIDI signals. They have also made companion devices such as the GR-20, which can synthesize a plethora of instruments (e.g. sitar, piano, …).
OK, but the downside of the Roland approach is that you have to buy and attach a device to your guitar. This device uses a dedicated pickup that reads the strings independently, which simplifies the signal analysis. Moreover, the GK costs about $200, and the GR more than twice that…
Back to the original topic: what I’d like is the ability to plug my guitar into the sound card and play my riff so that the notes are written into the tablature editor, as if I had a MIDI guitar… but without adding any device. Real-time analysis may be quite complex, but I guess it’s worth trying.
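As a first experiment in that direction, even a naive autocorrelation pitch tracker gets surprisingly far on clean, monophonic input. This is only a toy sketch; a real guitar signal would need windowing, harmonics handling and onset detection on top:

```python
import math

def detect_pitch(samples, sample_rate, fmin=60.0, fmax=1000.0):
    """Estimate the fundamental of a monophonic signal by autocorrelation:
    the lag at which the signal best matches a shifted copy of itself
    corresponds to one period of the fundamental."""
    best_lag, best_corr = 0, 0.0
    for lag in range(int(sample_rate / fmax), int(sample_rate / fmin)):
        corr = sum(samples[i] * samples[i + lag]
                   for i in range(len(samples) - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return sample_rate / best_lag if best_lag else 0.0

def freq_to_midi(freq):
    """Map a frequency to the nearest MIDI note number (A4 = 440 Hz = 69)."""
    return int(round(69 + 12 * math.log2(freq / 440.0)))
```

On a synthetic 440 Hz tone sampled at 8 kHz, this lands within a few hertz of A4 (MIDI note 69); the integer lag resolution is what limits accuracy at higher pitches.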
Another, much easier option would be to at least have a way to tap the notes on the computer keyboard. Maybe you can already do that by using the tablature editor’s MIDI input with a virtual keyboard connected to it. If so, why isn’t this integrated into tablature editors? It is rather tedious to write music with the mouse and the keypad, really. Basically, you would play FretsOnFire with your keyboard (using Enter and the F1/F2/Fx keys) to give the rhythm.
Anyway, since I’m quite lazy, I don’t want to use the mouse and keypad for this task anymore, so I’ll investigate these ideas as soon as I can find the time. Oh, and if you know of software, or a combination of software, that can do this, I’d really like to hear about it (and about scientific publications too).