News about my FOSS projects

Monday, December 5, 2011

@Wandisco sponsors Bloodhound, a fork of Trac

Trac becomes Bloodhound?

Even if you don't know it yet, I'm a convinced Python advocate and developer. I like programming and I also think that Trac is one of the best-designed systems out there. I have implemented some plugins myself (hope you like them ;). Besides, it's pretty famous and there are countless online installations. Nonetheless, in the last few years its development has not been as fast as many of us would have wanted. There are several reasons for this. Firstly, Edgewall (the company behind Trac) is not what it used to be once upon a time. Besides, the site hosting many plugins has needed an upgrade for a long time now. In my opinion it would also be nice to migrate the VCS to a distributed system, say either Mercurial or Git (I vote for the former ;). In my personal experience, it also happens that plugin developers may not get enough support to dedicate time to making their ideas a reality, and decide to take another job. Recently I got some good news: there is strong interest in supporting Trac's development. The whole thing started a few months ago when a message was sent to the trac-users and trac-dev mailing lists. Now, thanks to Wandisco, this request led to a proposal named Bloodhound, willing to support its development but incubated under the Apache (ASF) umbrella. So far the idea is to create a friendly fork of the core Trac distribution and also package some useful and successful plugins. I'm so excited about the idea. Let's see how it goes. Thanks to all those who made this possible.

Tuesday, August 30, 2011

TuxInfo - Introduction to functional programming in Python

TuxInfo 40 ready for download! Why is Android number 1?

Recently I wrote (together with Arnau Sánchez) an article on functional programming in Python. It has been published by TuxInfo magazine. It is the most recent in a series whose goal is to introduce beginners to the art of programming in Python, and very particularly to illustrate the language's support for multiple programming paradigms. All of you interested in subjects related to computer programming (and able to understand Spanish ;o) are invited to read it. Comments are welcome. You can access this page to download TuxInfo 40, or consult previous editions in the archive.
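The article itself is in Spanish, but to give a flavour of the topic, here is a tiny sketch (not taken from the article's listings) of the functional style it covers: higher-order functions, lambdas, and lazy map/filter pipelines.

```python
def compose(f, g):
    """Return a new function that applies g first, then f."""
    return lambda x: f(g(x))

# map() and filter() consume iterables lazily; list() forces evaluation
squares_of_evens = list(map(lambda n: n * n,
                            filter(lambda n: n % 2 == 0, range(10))))

inc_then_double = compose(lambda x: 2 * x, lambda x: x + 1)

print(squares_of_evens)      # [0, 4, 16, 36, 64]
print(inc_then_double(3))    # (3 + 1) * 2 == 8
```

Functions are first-class values here: `compose` takes two functions and returns a third, without any mutable state involved.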

Before closing, kudos are due to the TuxInfo team, who have achieved the major goal of keeping the magazine among the favourites of free software fans for 40 volumes. Congratulations Ariel Corgatelli et al.


Sunday, July 10, 2011

Appeartowork - Use Facebook while you appear to work

Enjoy Facebook while you appear to work

A few days ago, on the 8th of July 2011 at 21:00 h (more or less ;o), I could finally deploy the web site Appeartowork. The underlying idea is simple: let users access their Facebook accounts while they appear to work. Nowadays there are three flavours. Firstly there's a skin similar to MS Word; everybody will be wondering what kind of report you're writing ;o). It's also possible for users to pretend they are using MS Excel. Oh my! So many complex formulas in such a lovely spreadsheet! :P. Finally, geeks and gurus are not forgotten. How could we forget them? There's a lovely console for you to type into. If something goes wrong, just type rm -Rv / and press Enter. Take a deep breath: you'll just see that message posted to your wall rather than lose all your valuable data. The best part is that your boss is never going to know, and will encourage you with something like "Cool! Keep up the good work. Great idea." Needless to say, you'll get a bonus by the end of the month thanks to your productivity and impeccable behavior :o).

And that's it! Even if there are a few details left to polish, it's an interesting and funny idea for you to try. Hope you all like it!

You know how it goes: your angry boss is after you, trying to catch you contacting friends while on duty? Just try Appeartowork ;o).

Thursday, April 15, 2010

On adding AMF (RPC) support for Trac

Try experimental features in Trac XmlRpcPlugin

Another important milestone. A few months after Odd Simon Simonsen invited me to help develop the very successful and useful TracXmlRpc plugin, and a few days after implementing Hessian support for Trac, it's time to focus on Action Message Format. Yes! Things move fast because it's so easy.

Action Message Format (AMF) is a binary protocol designed by Macromedia, now part of Adobe Systems. Its goal is to serialize and deserialize ActionScript objects in order to exchange data between an Adobe Flash application (e.g. Adobe Flash Player) and a remote service, usually over the internet. In other words, it allows using Flash Remoting MX together with ActionScript so as to develop interactive applications; in this particular case, by retrieving the data managed by any Trac environment. A few minutes ago I finally managed to invoke remote services via this protocol using a Python script. The details ...


... I don't want to reveal all the details, since the implementation of the underlying API is not completely stable or ready to be released, and quite a few details will probably change in the near future. However, this example helped to figure out how methods should exchange the information needed to process an RPC request.

Well, the whole story starts with an enhancement request and a patch submitted to the TracXmlRpc plugin's issue tracker by thijs. At that moment the only protocols supported were XML-RPC and JSON-RPC. The former was fully supported by the standard xmlrpclib.ServerProxy, whereas the latter could be used by installing libraries like wsgi-jsonrpc. On the other hand, there was no way to add further protocols by installing third-party plugins. In the near future this will be possible, and the component I am talking about is consistent with that approach. In this particular case, thijs also provided an example based on the PyAMF library. I just needed to tweak it a little in order to get this script:

#!/usr/bin/env python

"""AMF client for the Trac RPC plugin."""

import base64
from optparse import OptionParser
from socket import error
import sys

from pyamf.remoting import RemotingError
from pyamf.remoting.client import RemotingService

p = OptionParser()

p.add_option("-U", dest="username",
             help="Trac USER", metavar="USER")
p.add_option("-P", dest="password", metavar="PASSWORD",
             help="use PASSWORD to authenticate USER")
p.add_option("-e", dest="env", metavar="URL",
             help="base URL of target environment")

(opts, args) = p.parse_args()

username = opts.username
password = opts.password
try:
    url = opts.env + '/rpc'
except TypeError:
    sys.exit("Missing Trac environment. Use -e option.")

service_name = 'system'

gw = RemotingService(url)
if (username, password) != (None, None):
    # Send credentials using HTTP Basic authentication
    auth = base64.encodestring('%s:%s' % (username, password))[:-1]
    gw.addHTTPHeader("Authorization", "Basic %s" % auth)

service = gw.getService(service_name)

try:
    print service.getAPIVersion()
except RemotingError, e:
    print e
except error, e:
    print e[1]

This script shows the installed version of the plugin. As you can see, if the base URL of the Trac environment is e.g. http://localhost/trac, then the protocol is available at http://localhost/trac/rpc. In fact this URL can be used to access all (well, most of ;o) the active protocols installed in a given environment. Protocol selection at that path is based on the Content-Type header supplied by the client in the HTTP request (e.g. application/x-amf for AMF). That's the main requisite Odd considered when designing the underlying API, and it's great!
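Just to illustrate the idea, header-based protocol selection might look something like the following. This is a hypothetical sketch, not the plugin's actual code; the HANDLERS table and the select_protocol name are made up.

```python
# Hypothetical sketch: map the MIME type of the incoming HTTP request
# to a protocol handler key. The real plugin dispatches to handler
# components, but the principle is the same.
HANDLERS = {
    'application/x-amf': 'amf',       # Action Message Format
    'application/json': 'jsonrpc',    # JSON-RPC
    'text/xml': 'xmlrpc',             # XML-RPC
}

def select_protocol(content_type):
    """Return the protocol key for a Content-Type header value,
    ignoring any parameters such as charset."""
    mime = content_type.split(';')[0].strip().lower()
    return HANDLERS.get(mime)

print(select_protocol('application/x-amf'))        # amf
print(select_protocol('text/xml; charset=utf-8'))  # xmlrpc
```

An unrecognized Content-Type simply yields no handler, so the server can fall back to an error response.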

In order to see it in action, you just need to open a console and type

$ python ./ -U username -P mypassword -e http://localhost/trac
[1, 1, 0]

That's it! Below you can see a simplified version using just the Python interpreter.

>>> import base64
>>> from pyamf.remoting import RemotingError
>>> from pyamf.remoting.client import RemotingService
>>> username, password = 'olemis', 'mypassword'
>>> url = 'http://localhost/trac/rpc'
>>> gw = RemotingService(url)
>>> auth = base64.encodestring('%s:%s' % (username, password))[:-1]
>>> gw.addHTTPHeader("Authorization", "Basic %s" % auth)
>>> service = gw.getService('system')
>>> print service.getAPIVersion()
[1, 1, 0]

Isn't it simple? All these implementations will soon be offered by TracRpcProtocols. The underlying API needs to be released before that, of course ;o). If you'd like to follow the development of these features, I invite you to subscribe to the RSS feed. Adding AMF support for Trac is just the second episode of the first season. Thanks osimons and thijs ;o).


Friday, March 5, 2010

On adding Hessian (RPC) support for Trac

Multi-protocol Trac RPC API

Yesterday was a historic date. In the morning I finally managed to request the data managed by a Trac instance via Hessian RPC. The whole story? Well ... A few months ago Odd Simon Simonsen invited me to participate and send patches to the very popular and successful XmlRpcPlugin. In order to do that he cloned the SVN repository using Hgsvn, published its contents at Bitbucket and created an MQ repository. We are developing new patches in there. Right now we are both working on an API to support RPC calls via multiple protocols (beyond the built-in XML-RPC and JSON-RPC implementations ;o). We needed a protocol to experiment with and tune the current prototype. Hessian was the target of this research.


Hessian is a binary, dynamic, and very efficient RPC protocol. It's very popular in Java. Nonetheless (unlike others like RMI ...) it can be implemented on top of any language, and that includes Python as well. Its dynamic nature (similar to XML-RPC) helps with the bindings.

So far I'd rather not unveil the implementation details and the impact this might have on the underlying API (because changes will be applied soon ;o), but I'd like to show how to request data from Trac using this protocol. It's very simple!

>>> from hessian.client import HessianProxy as HSP
>>> auth = {'username' : 'olemis', 'password' : 'canhurtyoureyes'}
>>> hsp = HSP('http://localhost/trac/newprj/hessian', auth)
>>> getattr(hsp, 'system.getAPIVersion')()
[1, 1, 0]

That's it! If you'd like to follow the development of these features, I invite you to subscribe to the RSS feed. Adding Hessian support for Trac is just the beginning of a wonderful plugin ecosystem ;o). You should be able to access your data the way you want!


Thursday, January 21, 2010

TracGViz plugin downloaded more than 1000 times (> 300 from PyPI)

TracGViz plugin

Among the Trac plugins I've implemented so far, TracGViz has a special place. Recently this module surpassed 300 downloads from PyPI. That is only a fraction of the total, because it is also possible to download it from Softpedia (518 downloads for Mac and 203 for Linux) and from WareSeeker (12 downloads). In total 1055 downloads have been made, counting the 322 from PyPI. But who knows, maybe there are other sites that offer similar services. Still, these numbers (note that I am speaking only of version 1.3.4 ;o) are enough to fulfill my expectations. For more details, please read this short entry and consult the documentation on the project site.

This plugin integrates the PMS with the technology known as the Google Visualization API to enhance the Trac wiki with mini-applications of all kinds. It also makes it possible to display and process data managed by a project environment on sites located beyond the domain where the management system is hosted. As a side effect, it extends the Trac RPC API, providing access (create, read, execute) to reports, to data about the source code and its versions, and to details of the events recorded during the lifetime of the project.


Currently (i.e. up to version 1.3.4) an optimistic estimate indicates that this tool satisfies only about 10% of the use cases for this Google technology. However, version 1.4.1 will be ready soon, and it will cover about 80%, especially some of the most common tasks. The improvements provide new features for users, support for displaying multiple views of the same data using any of the available widgets, as well as integration with other systems. Several tests show that it is stable.

A practical example will be posted soon on this blog, so my advice is to follow the upcoming articles. I hope they will be useful for you.


Monday, December 21, 2009

Assessment of unittest 2.7 API : new features and opinions

Yesterday I found the time to review the new features introduced in the unittest module for Python 2.7a. In this post I will mention what's new and express my opinion about the current semantics of the decorators included there. I invite you to discover the reasons that make me think they may be confusing. Perhaps you will agree with me that some enhancements would be very helpful or, otherwise, that the current documentation has to be more explicit and explain the tricky parts. I am concerned about your health: let's prevent your headaches. If you are not very familiar with the unittest module for Python 2.7a, I recommend you read an article about standard Python testing frameworks, and an overview of the xUnit paradigm.

What's new in unittest (Python 2.7a)?

In this version the API has been extended. In order to do that, the former implementation was patched. This means that the new features are included in the same classes you used before. If you read until the end you will understand why I don't like this approach. On the other hand, the unittest module for Python 2.7a becomes a package. It contains the following modules:

  • __init__: Provides access to the core classes implemented in sub-modules
  • __main__: Main entry point. Wraps main.
  • case: Everything you need in order to write functional test cases.
  • loader: Test loaders can be found here.
  • main: unittest module for Python 2.7a main program. Used to run tests from the command line.
  • result: Contains core test results.
  • runner: Includes test runners and their (specific) test results
  • suite: Includes suites and classes used to group test cases
  • util: Various utility functions

Here I just wanted to mention that I think it would be nice to have a module named unittest.exceptions in order to define all unittest-specific exceptions in there.

The first modification is that most of the fail* methods have been deprecated. The reason is to use a common and less redundant vocabulary. Positive assertions starting with the assert prefix remain. Some of them have been enhanced. For instance, it is possible to register type-specific equality functions that will be invoked by assertEqual and will generate a more useful default error message. In this version you will also find new assertion methods. The ones I find most useful are the following:

  • assertMultiLineEqual(self, first, second, msg=None): Assert that two multiline strings are equal
  • assertRegexpMatches(text, regexp[, msg=None]): Very useful. Check that a pattern (i.e. regular expression) matches a text.
  • assertSameElements(expected, actual, msg=None): Check that two sequences contain the same elements.
  • assertSetEqual(set1, set2, msg=None): Compare two sets and fail if not equal. However it fails if either of set1 or set2 does not have a set.difference() method.
  • assertDictEqual(expected, actual, msg=None): Very useful. Fail if two dictionaries are not equal, considering their key/value pairs.
  • assertDictContainsSubset(expected, actual, msg=None): Fails if the key/value pairs in dictionary actual are not a superset of those in expected.
  • assertListEqual(list1, list2, msg=None) and assertTupleEqual(tuple1, tuple2, msg=None): Similar to assertSameElements but considers the repetitions and the ordering of items in a tuple or list.
  • assertSequenceEqual(seq1, seq2, msg=None, seq_type=None): Similar to assertListEqual and assertTupleEqual but can be used with any sequence.

Here is an example. If you set the test case instance's longMessage attribute to true, then the detailed messages shown below appear even if you specify your own; otherwise the custom message overrides the built-in message.

from unittest import TestCase, main

class AlwaysFailTestCase(TestCase):
    def test_assertMultiLineEqual(self):
        l1 = r"""This is my long
    very long
    looooooooong line"""
        l2 = r"""
    This one is not so long ;o)"""
        self.assertMultiLineEqual(l1, l2)

    def test_assertRegexpMatches(self):
        # Verbatim copy of trac.notification.EMAIL_LOOKALIKE_PATTERN
        EMAIL_LOOKALIKE_PATTERN = (
            # the local part
            r"[a-zA-Z0-9.'=+_-]+" '@'
            # the domain name part (RFC:1035)
            '(?:[a-zA-Z0-9_-]+\.)+'  # labels (but also allow '_')
            '[a-zA-Z](?:[-a-zA-Z\d]*[a-zA-Z\d])?'  # TLD
        )
        self.assertRegexpMatches("", EMAIL_LOOKALIKE_PATTERN,
                                 "This one is ok")
        self.assertRegexpMatches("This is not an email address",
                                 EMAIL_LOOKALIKE_PATTERN)

    def test_assertSameElements(self):
        self.assertSameElements([1,2,2,2,1], [1,2])  # ok
        self.assertSameElements([1,2,3,2,1], [1,2])

    def test_assertSetEqual(self):
        self.assertSetEqual(set([1,2,3]), set([1,4]))

    def test_assertDictEqual(self):
        self.assertDictEqual({1: 1, 2: 2, 3: 3}, {2: 2, 3: 3, 1: 1})  # ok
        self.assertDictEqual({1: None}, {1: False})

    def test_assertDictContainsSubset(self):
        self.assertDictContainsSubset({1: 1}, {1: 1, 2: 2, 3: 3})  # ok
        self.assertDictContainsSubset({1: 1, 4: 4}, {1: 1, 2: 2, 3: 3},
                                      "(4,4) is missing")

    def test_assertListEqual(self):
        self.assertListEqual([1,2,3,4], [1,3,2,4])

    def test_assertTupleEqual(self):
        self.assertListEqual((1,2,3,4), (1,3,2,4))

main()


Let's pay attention to the results

test_assertDictContainsSubset (__main__.AlwaysFailTestCase) ... ERROR
test_assertDictEqual (__main__.AlwaysFailTestCase) ... FAIL
test_assertListEqual (__main__.AlwaysFailTestCase) ... FAIL
test_assertMultiLineEqual (__main__.AlwaysFailTestCase) ... FAIL
test_assertRegexpMatches (__main__.AlwaysFailTestCase) ... FAIL
test_assertSameElements (__main__.AlwaysFailTestCase) ... FAIL
test_assertSetEqual (__main__.AlwaysFailTestCase) ... FAIL
test_assertTupleEqual (__main__.AlwaysFailTestCase) ... FAIL

ERROR: test_assertDictContainsSubset (__main__.AlwaysFailTestCase)
Traceback (most recent call last):
File "<stdin>", line 36, in test_assertDictContainsSubset
File "/usr/bin/python2.7/lib/unittest/", line 723, in assertDictContainsSubset
standardMsg = 'Missing: %r' % ','.join(missing)
TypeError: sequence item 0: expected string, int found

FAIL: test_assertDictEqual (__main__.AlwaysFailTestCase)
Traceback (most recent call last):
File "<stdin>", line 32, in test_assertDictEqual
- {1: None}
+ {1: False}

FAIL: test_assertListEqual (__main__.AlwaysFailTestCase)
Traceback (most recent call last):
File "<stdin>", line 38, in test_assertListEqual
AssertionError: Lists differ: [1, 2, 3, 4] != [1, 3, 2, 4]

First differing element 1:

- [1, 2, 3, 4]
?        ---

+ [1, 3, 2, 4]
?     +++

FAIL: test_assertMultiLineEqual (__main__.AlwaysFailTestCase)
Traceback (most recent call last):
File "<stdin>", line 10, in test_assertMultiLineEqual
+     This one is not so long ;o)
- This is my long
-     very long
-     looooooooong line

FAIL: test_assertRegexpMatches (__main__.AlwaysFailTestCase)
Traceback (most recent call last):
File "<stdin>", line 23, in test_assertRegexpMatches
AssertionError: Regexp didn't match: "[a-zA-Z0-9.'=+_-]+@(?:[a-zA-Z0-9_-]+\\.)+[a-zA-Z](?:[-a-zA-Z\\d]*[a-zA-Z\\d])?" not found in 'This is not an email address'

FAIL: test_assertSameElements (__main__.AlwaysFailTestCase)
Traceback (most recent call last):
File "<stdin>", line 27, in test_assertSameElements
AssertionError: Expected, but missing:

FAIL: test_assertSetEqual (__main__.AlwaysFailTestCase)
Traceback (most recent call last):
File "<stdin>", line 29, in test_assertSetEqual
AssertionError: Items in the first set but not the second:
Items in the second set but not the first:

FAIL: test_assertTupleEqual (__main__.AlwaysFailTestCase)
Traceback (most recent call last):
File "<stdin>", line 40, in test_assertTupleEqual
AssertionError: First sequence is not a list: (1, 2, 3, 4)

Ran 8 tests in 0.015s

FAILED (failures=7, errors=1)

Have you noticed anything weird? Well, yes: the assertDictContainsSubset method has a bug. The former implementation (unlike this one) was very stable, and testers could close their eyes and write their tests. This one still needs to be improved (which is one of the things I don't like).

The TestResult class has been enhanced too. The methods startTestRun() and stopTestRun() have been added. They are very useful: the former allows you to prepare and activate the resources needed for reporting purposes before starting the whole test run, whereas the latter may be used to tear them down.

Now we have test discovery in the standard library. You can use the, pattern='test*.py', top_level_dir=None) method to find and return all test modules from the specified start directory, recursing into subdirectories. Only test files that match pattern will be loaded. It relies on a hook known as the load_tests protocol ... but I am not really very fond of this implementation. I prefer to use dutest.PackageTestLoader, but it seems those guys don't like it ...

Finally, the command line interface now allows running all the tests defined in a module or a class. Formerly it was only possible to run individual test methods.

Advanced features in TestCase class


Another feature introduced in this version is resource deallocation. Let's say you are writing your test cases and you have some branches and loops in your code. In one such branch you allocate resources that are not used in the other branches. In this case your test case might not allocate the resource at all, but if it does, you need to release it once everything's done. By using addCleanup(function[, *args[, **kwargs]]) you can add a function to be called after tearDown() to clean up resources used during the test. This helps because you won't need to write complex rules checking whether the resource was allocated or not. These functions are called in the reverse of the order in which they were added (LIFO), with any positional and keyword arguments passed into addCleanup() when they were registered. All of this takes place inside the doCleanups() method.

Custom test outcomes

A feature that coders demand more and more is support for custom test outcomes. This is very important because sometimes testers need to bind test data to test cases and reload it after test execution in order to perform post-mortem analysis. The new API supports expected failures, unexpected successes and skipped test cases. What's the syntax?

Test methods can be skipped by using a skip decorator. According to the documentation, basic skipping looks like this:

class MyTestCase(unittest.TestCase):

    @unittest.skip("demonstrating skipping")
    def test_nothing(self):"shouldn't happen")

    @unittest.skipIf(mylib.__version__ < (1, 3),
                     "not supported in this library version")
    def test_format(self):
        # Tests that work for only a certain version of the library.
        pass

    @unittest.skipUnless(sys.platform.startswith("win"), "requires Windows")
    def test_windows_support(self):
        # windows specific testing code
        pass

As you can see, every condition specified in the example shown above is static. In other words, if you evaluate it over and over, the result should be the same. What happens if you need to skip a test when it is run after a given time? You might think that the following example will fix your problem, but it won't:

from unittest import TestCase, main, skipIf
from datetime import datetime, time

class TimedTestCase(TestCase):
    @skipIf( > time(0, 7), "Timeout")
    def test_nop(self):
        ts =
        self.assertEqual(1, 0, "Current time %s" % (ts,))


Take a look at the following three results

test_nop (__main__.TimedTestCase) ... FAIL

FAIL: test_nop (__main__.TimedTestCase)
Traceback (most recent call last):
File "<stdin>", line 5, in test_nop
AssertionError: Current time 00:05:57.390000

Ran 1 test in 0.000s

FAILED (failures=1)

test_nop (__main__.TimedTestCase) ... FAIL

FAIL: test_nop (__main__.TimedTestCase)
Traceback (most recent call last):
File "<stdin>", line 5, in test_nop
AssertionError: Current time 00:07:04.109000

Ran 1 test in 0.000s

FAILED (failures=1)

test_nop (__main__.TimedTestCase) ... skipped 'Timeout'

Ran 1 test in 0.000s

OK (skipped=1)

The first one and the last one behave just as expected. However, the second is run after the deadline and the test case is not skipped. What's wrong? In order to answer that question we need to dive into the xUnit paradigm and pay attention to the steps involved in the whole process. If you have not read the article about standard Python testing frameworks, or if you forgot it, well, here we go. Firstly, test case classes are created when their declarations (i.e. class statements) are executed. Next, the test cases running specific test methods are instantiated by TestLoaders. Finally, the test cases are executed, thereby performing assertions about the behavior of the system under test. If you think the condition supplied to skipIf is evaluated at this last step, you are wrong. The second example fails because the test class was created before 12:07 AM, and the condition was evaluated at that moment, so the decorator decided to execute the test case. By the time it was run, at 12:07:04.109000 AM, the decision had already been made. In the other two examples, both events happen before and after the deadline, respectively. If you want my personal suggestion: BE CAREFUL.
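A simple way around the pitfall is to move the check inside the test method and call skipTest(), so the condition is evaluated at run time rather than when the class statement is executed. In this sketch the DEADLINE_PASSED flag stands in for the time check, which in the real example would be recomputed on every run:

```python
import unittest

DEADLINE_PASSED = True   # imagine: > time(0, 7)

class TimedTestCase(unittest.TestCase):
    def test_nop(self):
        # Evaluated every time the test runs, not at class creation
        if DEADLINE_PASSED:
            self.skipTest("Timeout")
        self.assertEqual(1, 0, "never reached while skipped")

result = unittest.TestResult()
unittest.TestLoader().loadTestsFromTestCase(TimedTestCase).run(result)

print(result.skipped[0][1])   # Timeout
```

Since skipTest() raises inside the running test, the decision always reflects the state of the world at execution time.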

Finally, at the TestResult level there is a separate list for each novel test outcome (i.e. skipped, expectedFailures, unexpectedSuccesses). Each one contains binary tuples of TestCase instances and informative messages (i.e. strings). Entries are appended to those lists by calling one of TestResult.addSkip(test, reason), addExpectedFailure(test, err) or addUnexpectedSuccess(test).
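A quick way to see those lists being filled (again using the modern unittest API, which kept the same names; the Outcomes class is made up for illustration):

```python
import unittest

class Outcomes(unittest.TestCase):
    @unittest.skip("not ready")
    def test_skipped(self):
        pass

    @unittest.expectedFailure
    def test_expected_failure(self):
        self.assertEqual(1, 0)

result = unittest.TestResult()
unittest.TestLoader().loadTestsFromTestCase(Outcomes).run(result)

# Each novel outcome lands in its own list on the result object
print(len(result.skipped), len(result.expectedFailures))   # 1 1
print(result.skipped[0][1])                                # not ready
```

Tools that post-process results can iterate these lists directly instead of parsing the textual report.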


My opinion is that the basic unittest classes we all know have been there for so long and are very stable; therefore, you can rely on them to write your test cases. On the other hand, the former API covered more than 80% of the test cases you will ever write. I agree with all those saying it is nice to include further tools in the standard library to support the 20% that's left, or to enhance interoperability between third-party frameworks providing solutions for sophisticated requirements. Nonetheless, the current implementation introduces the following issues:

  • Additional and more complex semantics, especially beyond the scope of basic testing concepts.
  • Monolithic design. Suppose other (useful) features are added in upcoming versions so as to cover the 10% not covered yet. Will we modify the core classes once again in order to get that done?
  • All this means it's not extensible.
  • Potential incompatibilities with previous versions (and/or) new bugs introduced, and users won't be able to fall back to the former stable classes.
  • The semantics of the decorators are tricky, and they do not include annotations in the target test cases.
  • It does not offer a common (stable and extensible) infrastructure to report custom test outcomes.
  • It's not possible to determine whether test cases are skipped due to an error condition (i.e. a failure) or a recoverable situation (i.e. a warning).

I do think the new functionalities are very important. However, I still consider it better to leave the classes just as they were and incorporate further details in new extensions built on top of them. In my opinion, that's the spirit of object-oriented APIs like the xUnit frameworks. Let's find a way to make it better!

What do you think ? Will you join us ?


Friday, October 16, 2009

Gasol in for Spain: they lead European basketball


Gasol, the big brother, is a tremendous, almost paranormal success. He has suction pads on his fingers, and his opponents know that he hypnotizes the ball. Like a magnet, he always intercepts it before it reaches the net. Who would even dare to put the ball into the basket? And you'd better not do it; otherwise he goes to your own court, makes you look silly and lowers your self-esteem by completing an alley-oop dedicated especially to you. You will feel ashamed!

Please listen: don't get mad, that's not the solution. Have you ever reconsidered what the point of all that is? Anyway, if he isn't the one, then anybody on the team will do it. They will bombard you, shooting from far beyond the three-point line, or they will pass the ball fast as hell, bringing all the fantasy you could imagine in order to score. There's no other way: you will be out and full of fouls. Then you'll realize that you were merely the most active person in the crowd rather than on the court. That's why the Spanish basketball team is the new European champion (for the first time) whereas Serbia couldn't make it. For Pau Gasol, his recent success in the NBA was just one of the first milestones.

Thursday, September 24, 2009

More than 300 downloads for dutest module

Download dutest module

Have you ever noticed how fast children become men and women? Of course you have! And I consider that my FOSS projects are just like my children: their goal is for everybody to be more efficient and happy. Hence I wanted to share this glorious moment with you: users have downloaded dutest more than 300 times. Isn't it wonderful? I know, I know ... I have only written a very concise article about that minimalistic framework. That's why I plan to write more tutorial entries about it as soon as I can.

You know this is a very brief entry, but that doesn't mean it's not important; rather, I have no free time at all :(. I just want to share my happiness with you, and thank all those who tried it out and use it.

Before closing, I invite you all to visit Simelo's projects site and the FLiOOPS project site, so that you can learn a little more about all my projects. I hope you like them, and I wish to be back with you soon, here on Simelo's blog.