The Need
For part of a Ruby project I've been working on, I needed a way to play audio files, and since I am using the project to teach myself the basics of Ruby programming, I decided to write my own audio player class utilizing the GStreamer multimedia framework.
Having written what is essentially the same code in both the Vala and Python programming languages, I thought this would be fairly simple with the aid of the documentation.
RTFM (a rant)
The documentation for using GStreamer with Ruby is part of the Ruby-GNOME2 documentation, and I found it dreadful to use. Since it is difficult to actually find a link to the GStreamer-related documents, I'll include one here: http://ruby-gnome2.sourceforge.jp/hiki.cgi?Ruby/GStreamer
Honestly, I tried to read the documentation, and it was so frustrating that I started to hate Ruby. What bothered me most about the documentation wasn't the sheer amount of missing information, it was the 500 Server Error that I would see on four out of every five clicks. Why someone thought it would be a good idea to serve the files through a CGI wiki and not as good old static files is beyond me. Aaahhhhhhh! I hate that crap!
OK, time to relax and just look at some code.
Enter The Ruby
require 'thread'
require 'gst' #gem install gstreamer
#the gst namespace is Gst
#initialize gst
Gst.init
class Player
  def initialize()
    #create a thread for a glib main loop
    thread = Thread.new() do
      @mainloop = GLib::MainLoop.new
      @mainloop.run
    end
    #make a few queries
    @query_position = Gst::QueryPosition.new(Gst::Format::TIME)
    @query_duration = Gst::QueryDuration.new(Gst::Format::TIME)
    #make the playbin
    @playbin = Gst::ElementFactory.make("playbin")
    #get the playbin's bus
    bus = @playbin.bus
    #watch the bus for messages
    bus.add_watch do |bus, message|
      handle_bus_message( message )
    end
  end

  #we will need to get the current position and duration of the playbin
  def position_duration()
    begin
      #run the queries
      @playbin.query( @query_position )
      #that is nanoseconds, I'll take milliseconds thank you.
      position = @query_position.parse[1] / 1000000
      @playbin.query( @query_duration )
      duration = @query_duration.parse[1] / 1000000
    rescue
      position = 0
      duration = 0
    end
    return {'position'=>position,'duration'=>duration}
  end

  def status()
    #get the state
    bin_state = @playbin.get_state
    #isn't there a better way to convert the state to a string?
    case bin_state[1]
    when Gst::State::NULL
      state = 'NULL'
    when Gst::State::PAUSED
      state = 'PAUSED'
    when Gst::State::PLAYING
      state = 'PLAYING'
    when Gst::State::READY
      state = 'READY'
    end
    volume = @playbin.get_property("volume")
    uri = @playbin.get_property("uri")
    pd = position_duration()
    #collect the state, volume and uri
    status_hash = {'state'=>state, 'volume'=>volume, 'uri'=>uri}
    #add the position and duration to the hash, and return
    return status_hash.merge( pd )
  end

  #set or get the volume
  def volume(val = nil)
    if !val.nil? and val>=0 and val<=1
      @playbin.set_property("volume", val)
    end
    return @playbin.get_property("volume")
  end

  #seek to a fraction (0..1) of the duration
  def seek_percent(val)
    if !val.nil? and val>=0 and val<=1
      pd = position_duration()
      duration = pd['duration']
      if duration > 0
        #position_duration() returns milliseconds, the seek event wants nanoseconds
        seek_loc = val*duration * 1000000
        seek = Gst::EventSeek.new(1.0, Gst::Format::Type::TIME, Gst::Seek::FLAG_FLUSH.to_i | Gst::Seek::FLAG_KEY_UNIT.to_i, Gst::Seek::TYPE_SET, seek_loc, Gst::Seek::TYPE_NONE, -1)
        @playbin.send_event(seek)
      end
    end
    return position_duration()
  end

  def quit()
    @playbin.stop
    @mainloop.quit
    #I thought no one liked a quitter?
  end

  def set_uri(uri)
    #null the playbin state
    @playbin.set_state(Gst::State::NULL)
    #set the uri
    @playbin.set_property("uri",uri)
  end

  def play()
    #really? just play
    @playbin.play
  end

  def pause()
    #really? just pause
    @playbin.pause
  end

  def handle_bus_message( message )
    case message.type
    when Gst::Message::Type::ERROR
      #null the pipeline
      @playbin.set_state(Gst::State::NULL)
      #TODO: send a signal that playing is finished
    when Gst::Message::Type::EOS
      #null the pipeline
      @playbin.set_state(Gst::State::NULL)
      #TODO: send a signal that playing is finished
    when Gst::Message::Type::TAG
      tag_list = message.parse()
      #we need to get the key and value from the tag
      tag_list.each do |key,val|
        #TODO: store some of this data
      end
    when Gst::Message::Type::STATE_CHANGED
      state = @playbin.get_state
    else
      #what should we do?
    end
    #the watch callback must return true to stay installed, otherwise shit breaks
    true
  end
end
if __FILE__ == $0
  input = ARGV[0]
  if input.match(/^http:\/\//)
    #why the hell doesn't this work?
    uri = input
  else
    uri = "file://" + File.absolute_path(ARGV[0])
  end
  player = Player.new
  player.set_uri(uri)
  player.play()
  loop = true
  sleep 1
  while loop
    puts "type 'quit' to quit"
    s = $stdin.gets.chomp
    if s.eql? "quit"
      loop = false
    end
  end
  player.quit()
end
For some reason the code will not play an audio file over HTTP, and this bothered me for a bit; then I decided that I just don't care. One thing you may notice is that this class will create a new thread for running a GLib mainloop. Had this class been part of a larger project that uses a GLib mainloop, the new thread probably wouldn't be necessary, but hey, I'm not writing a GLib-based project.
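Just to illustrate that, here is a rough sketch of what initialize() might look like if the surrounding application were already spinning its own GLib::MainLoop; the thread goes away and the bus watch simply gets serviced by whatever loop the host app is running:
def initialize()
  #no thread here: assume the host application already runs a GLib::MainLoop
  @query_position = Gst::QueryPosition.new(Gst::Format::TIME)
  @query_duration = Gst::QueryDuration.new(Gst::Format::TIME)
  @playbin = Gst::ElementFactory.make("playbin")
  #the bus watch gets dispatched by the host app's main loop
  @playbin.bus.add_watch do |bus, message|
    handle_bus_message( message )
  end
end
(In that setup quit() would also need to stop calling @mainloop.quit, since the loop wouldn't belong to this class.)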
Quite a while ago, I made a basic metronome application and named it hubcap in honor of Linux Outlaws host Dan Lynch. Anyone who has ever heard me play music knows just how much I really need a metronome, but that is beside the point, and I promise that this won't be some pointless April 1st crap.
Fortunately, Linux Outlaws has two hosts, and I had a yearning to hack code and process data.

Shnerkel is an aggregator and player of the Linux Outlaws ogg feed. Sauce: shnerkel.tar.gz
--Requirements--
gstreamer-0.10
gtk+-2.0
webkit-1.0
libxml-2.0
Vala ( for compiling )
The main impetus for creating this application was to play with webkit in Vala. As I see it, there are a few bonus results of creating this app.
1. Since the app plays the ogg version of the Linux Outlaws audcast, the statistics for the number of downloads of the mp3 and ogg versions will hopefully tip towards ogg.
2. It is a fairly easy way to increase the exposure of Linux Outlaws and Ogg, although I'm probably preaching to the choir on both counts.
3. Almost a full dozen people will have something real to read on April 1st.
The Good
With shnerkel, there is no more waiting to download the audcast. The audio file is streamed over HTTP by the gstreamer library. Shnerkel uses the same audio player class as sap, which really cut down on development time. Thanks Open Source.
The Bad
You will notice the lack of a progress bar. For some reason gstreamer doesn't return the duration of an ogg file being played over HTTP. What the hell is up with that? It is either a problem with Gstreamer or a problem with the Ogg format.
The Ugly
Plenty. Before you complain, go look in the mirror. Oh snap! You got burned by that one! In the appwindow.vala file, I pull information from a GTK TreeStore as follows:
string description = "";
string file = "";
tree_selection = tv.get_selection();
tree_selection.get_selected(out model, out iter);
episodeTreeStore.get(iter, 3, &description, 2, &file, -1);
It seems to me that passing 'description' and 'file' to the function as references is rather un-Vala-like, and the function should instead use 'out' to pass data to the strings. Oh well...
It's almost midnight, but I don't think I'll stay up and write the first page of my movie script for the ScriptFrenzy challenge.
Now quit reading, and go find the elusive Dirk Shnerkelberger.
#!/usr/bin/env python
import os
import sys
import gst
import gobject
class tag_getter:
    def __init__(self):
        #make a dictionary to hold our tag info
        self.file_tags = {}
        #make a playbin to parse the audio file
        self.pbin = gst.element_factory_make("playbin")
        #we need to receive signals from the playbin's bus
        self.bus = self.pbin.get_bus()
        #make sure we are watching the signals on the bus
        self.bus.add_signal_watch()
        #what do we do when a tag is part of the bus signal?
        self.bus.connect("message::tag", self.bus_message_tag)
        #create a loop to control our app
        self.mainloop = gobject.MainLoop()

    def bus_message_tag(self, bus, message):
        #we received a tag message
        taglist = message.parse_tag()
        #put the keys in the dictionary
        for key in taglist.keys():
            self.file_tags[key] = taglist[key]
        #for this test, if we have the artist tag, we can quit
        if 'artist' in self.file_tags:
            print self.file_tags
            sys.exit()

    def set_file(self, file):
        #set the uri of the playbin to our audio file
        self.pbin.set_property("uri", "file://" + file)
        #pause the playbin, we don't really need to play
        self.pbin.set_state(gst.STATE_PAUSED)

    def run(self):
        #start the main loop
        self.mainloop.run()

if __name__ == "__main__":
    if len(sys.argv) > 1:
        file = sys.argv[1]
        pwd = os.getcwd()
        filepath = os.path.join(pwd, file)
        getter = tag_getter()
        #pass the absolute path so the file:// uri is valid
        getter.set_file(filepath)
        getter.run()
    else:
        print "select an audio file"
Useful? Not really, but it certainly is a good building block for a more advanced application.
For example, the cover art that some files carry in the 'image' tag can be written straight out to a file from inside the key loop:
if key == 'image':
    img = open('temp.png', 'w')
    img.write(taglist[key])
    img.close()
It took me a while to wrangle with the tee requirements for handling queues. I could see how, but I couldn't understand why. So anyway, this is what I came up with:
1. a tee in the pipeline gets a name
2. the end of a queue gets declared as part of the tee, and is given the name of the tee followed by a period
3. add a queue to the gstreamer pipeline
4. that end-of-queue thingy gets placed at the end of each queue (this doesn't seem to be required for the last queue)
My gstreamer pipeline looks like this:
gst-launch \
filesrc location=/path/to/audio/file \
! decodebin ! audioconvert \
! tee name=myT myT. \
! queue ! autoaudiosink myT. \
! queue ! goom ! ffmpegcolorspace ! autovideosink
Sweet! Now on to my pythonic version using pygst.
#!/usr/bin/env python
import sys
import gst
import time
class myPlayer():
    def __init__(self):
        self.pipeline = gst.Pipeline()
        self.src = gst.element_factory_make("filesrc", "src")
        self.decoder = gst.element_factory_make("decodebin", "decoder")
        self.decoder.connect("new-decoded-pad", self.onNewDecodedPad)
        self.goom = gst.element_factory_make("goom")
        self.colorspace = gst.element_factory_make("ffmpegcolorspace", "color")
        self.conv = gst.element_factory_make("audioconvert", "conv")
        self.vidsink = gst.element_factory_make("autovideosink", "videosink")
        self.asink = gst.element_factory_make("autoaudiosink", "aoutput")
        self.tee = gst.element_factory_make("tee", "tee")
        self.queuea = gst.element_factory_make("queue", "queuea")
        self.queuev = gst.element_factory_make("queue", "queuev")
        self.pipeline.add(self.src, self.decoder, self.conv, self.tee, self.queuea)
        self.pipeline.add(self.asink, self.queuev, self.goom, self.colorspace, self.vidsink)
        gst.element_link_many(self.src, self.decoder)
        gst.element_link_many(self.conv, self.tee)
        self.tee.link(self.queuea)
        self.queuea.link(self.asink)
        self.tee.link(self.queuev)
        gst.element_link_many(self.queuev, self.goom, self.colorspace, self.vidsink)

    def onNewDecodedPad(self, decodebin, pad, islast):
        #link the new pad to the converter
        decodebin.link(self.conv)

    def playfile(self, file):
        self.src.set_property('location', file)
        self.pipeline.set_state(gst.STATE_PLAYING)
        pipelinestate = self.pipeline.get_state()
        while pipelinestate[1] == gst.STATE_PLAYING:
            time.sleep(1)
            pipelinestate = self.pipeline.get_state()
        sys.exit()

if __name__ == '__main__':
    if len(sys.argv) > 1:
        file = sys.argv[1]
        player = myPlayer()
        player.playfile(file)
    else:
        print "you must select a tune"
The big difference here, at least to me, is that the decodebin isn't really one fixed bin; it represents a series of possible bins. If one were to select a Vorbis file to play, the decodebin determines the correct type of bin needed to handle the file and creates an instance of it. The same is true for wav, flac, aac, mp3, etc., all of which have a specific decoder that I don't want to have to figure out, so I let the decodebin do it for me. This line: self.decoder.connect("new-decoded-pad", self.onNewDecodedPad) calls a function whenever the decodebin exposes a newly decoded pad, and it is in the onNewDecodedPad function that the decodebin gets linked to the rest of the pipeline. Does that make sense?
"self.popeline.add" should be "self.pipeline.add"
Now this works on my...drumroll...Nokia N900! Maemo forever :-)
#!/bin/sh
gst-launch \
filesrc location=/path/to/audio/file \
! decodebin ! audioconvert \
! tee name=myT \
myT. ! queue ! autoaudiosink \
myT. ! queue ! goom ! ffmpegcolorspace ! autovideosink
import sys, os, os.path, time
import pygst
pygst.require("0.1")
import gst
But I don't know how to use the new version of GStreamer ... I tried the above with the version changed to:
import gi
gi.require_version('Gst', '1.0')
from gi.repository import GObject, Gst, Gtk
and got this error:
import gi
File "/usr/lib/python2.7/dist-packages/gi/__init__.py", line 39, in <module>
raise ImportError(_static_binding_error)
ImportError: When using gi.repository you must not import static modules like "gobject". Please change all occurrences of "import gobject" to "from gi.repository import GObject". See: https://bugzilla.gnome.org/show_bug.cgi?id=709183
Please advise the right way..
thanks
Anes
You may want to use "playbin2" instead of "playbin".
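For the Player class above, that would presumably just be a one-line change where the element gets created:
@playbin = Gst::ElementFactory.make("playbin2")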
Thanks again for the start here, hope you enjoy the equalizer.
First: add this to initialize()
bin = Gst::Bin.new()
@eq = Gst::ElementFactory.make("equalizer-10bands")
autosink = Gst::ElementFactory.make("autoaudiosink")
bin.add(@eq)
bin.add(autosink)
@eq >> autosink
eqpad = @eq.get_pad("sink")
gpad = Gst::GhostPad.new("gpad", eqpad) # playbin2 requires a ghost pad, not sure why
bin.add_pad(gpad)
@playbin.audio_sink = bin
Then add this function to the class:
#set or get the equalizer, pass in a hash of one or more bands { :band0 => 10, :band1 => 5, ... :band9 => 10}
def eq(bands = {})
  b = {
    :band0 => @eq.band0,
    :band1 => @eq.band1,
    :band2 => @eq.band2,
    :band3 => @eq.band3,
    :band4 => @eq.band4,
    :band5 => @eq.band5,
    :band6 => @eq.band6,
    :band7 => @eq.band7,
    :band8 => @eq.band8,
    :band9 => @eq.band9
  }.merge(bands)
  @eq.band0 = b[:band0]
  @eq.band1 = b[:band1]
  @eq.band2 = b[:band2]
  @eq.band3 = b[:band3]
  @eq.band4 = b[:band4]
  @eq.band5 = b[:band5]
  @eq.band6 = b[:band6]
  @eq.band7 = b[:band7]
  @eq.band8 = b[:band8]
  @eq.band9 = b[:band9]
  return b
end
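Usage would then be something like this (assuming a Player instance named player, like the one in the __FILE__ == $0 block above; the equalizer-10bands gains are in dB, roughly -24 to +12):
#boost the lows and highs a bit, and read back the resulting settings
p player.eq( :band0 => 6, :band1 => 3, :band9 => 6 )
#or just read the current settings without changing anything
p player.eq()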