Python: Drinking the Tokyo Kool-Aid

April 16, 2009 § 1 Comment

After reading what LightCloud can do, it's only natural, of course, to create an object that serializes to Tokyo.

And that's exactly what I did. The project (called Hail) is still in its infancy, but the profile tests already answer some of my questions and curiosities about LightCloud (and Tokyo).

One obvious weakness I need to tackle: Serializing is too slow.

Questions that got answered:

  • The slowness is not caused by the size of each object; it is caused by the number of items.
  • LightCloud executes a lot of function calls, though most of them are really fast.
  • EDIT: Tokyo is fast! Especially after comparing it with Memcache. But LightCloud is not. Tokyo is not as fast as I thought… but this is not my final word; I should run profile_test against a raw Tokyo Tyrant node. On top of that, LightCloud's overhead is not negligible.
  • Serializing with cjson is faster than cPickle. That's surprising (see the sketch below).
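
For reference, here is a minimal sketch of the kind of comparison behind that last observation. It assumes Python 2 and the third-party cjson package; the object, loop count, and labels are made up for illustration:

import time
import cPickle
import cjson

obj = {'id': 1, 'name': 'hail', 'tags': ['tokyo', 'lightcloud'], 'count': 42}

def bench(label, dump, runs=100000):
    # Time `runs` serializations of the same small object.
    start = time.time()
    for _ in xrange(runs):
        dump(obj)
    print '%-8s %.3f seconds for %d dumps' % (label, time.time() - start, runs)

bench('cjson', cjson.encode)
bench('cPickle', lambda o: cPickle.dumps(o, cPickle.HIGHEST_PROTOCOL))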

Next, I should test getting items from both Memcache and Tokyo. I expect that to be really fast.


Beards of Python

April 4, 2009 § Leave a comment

I believe these are PyCon09 attendees.

How fast is Python dictionary?

March 30, 2009 § Leave a comment

Based on textbook theory: O(1).

I was about to run a profile test on it, but found this discussion on the mailing list. One poster claims about 0.2 seconds per 1 million keys.

Perfect.
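
As a rough sanity check of that figure, a timing sketch along these lines should do (plain Python 2; numbers will vary by machine):

import time

d = {}
start = time.time()
for i in xrange(1000000):
    d[i] = i          # one insert per key
print 'inserted 1,000,000 keys in %.3f seconds' % (time.time() - start)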

Even Matz loves Python

March 27, 2009 § Leave a comment

Matz loves Python

Python: cPickle vs ConfigParser vs Shelve Performance

March 26, 2009 § 4 Comments

I need to store a large number of key-values to map my Python objects. These key-values DO NOT have to be replicated across multiple servers, and the project DOES NOT require external storage systems such as an RDBMS, Berkeley DB, or others. The fewer external dependencies, the better.

That leads me to cPickle vs ConfigParser vs Shelve. cPickle is an obvious contender: it is fast and easy to use.

ConfigParser is an interface for writing config files, but its format is very key-value-ish, so it counts.

Shelve is an obvious choice too, because of its dictionary-like interface.

So I ran profile tests using hotshot, and here are the results:


Profile: Saving 100000 key-value to pickle file
700001 function calls in 2.330 CPU seconds


Profile: Extracting 100000 key-value from pickle file
4 function calls in 0.258 CPU seconds


Profile: Saving 100000 key-value in ConfigParser file
900004 function calls in 2.502 CPU seconds


Profile: Extracting 100000 key-value from ConfigParser file
300007 function calls in 1.936 CPU seconds


Profile: Saving 100000 key-value to shelve file
1300047 function calls (1300045 primitive calls) in 10.091 CPU seconds


Profile: Extracting 100000 key-value from shelve file
500027 function calls in 6.527 CPU seconds

From the results:

  • Shelve is disappointingly slow. It executes 1,300,047 calls???
  • cPickle is not bad at all. As expected, it performs really quickly.
  • ConfigParser is the biggest surprise here; I was expecting it to be much slower.

Side Notes:

  • I use threading.Lock before setting each key-value to prevent resource contention (which mirrors the real-life case).
  • Any improvements are greatly appreciated, especially pointers to data storage options I'm not aware of.
  • Code can be found here.
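
For illustration, here is a minimal sketch of the kind of hotshot run behind the pickle numbers above. It assumes Python 2 (cPickle, hotshot); the file paths and value format are made up:

import cPickle
import hotshot
import hotshot.stats
import threading

N = 100000
lock = threading.Lock()

def save_pickle(path):
    data = {}
    for i in xrange(N):
        with lock:                          # guard each write, as in the side notes
            data[str(i)] = 'value-%d' % i
    with open(path, 'wb') as f:
        cPickle.dump(data, f, cPickle.HIGHEST_PROTOCOL)

prof = hotshot.Profile('/tmp/save_pickle.prof')
prof.runcall(save_pickle, '/tmp/kv.pickle')
prof.close()
hotshot.stats.load('/tmp/save_pickle.prof').sort_stats('time').print_stats(10)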

Tokyo Cabinet/Tyrant for Python Programmers

March 18, 2009 § 4 Comments

1. Tokyo Tyrant Tutorial

2. Tokyo Tyrant Documentation

3. Starting the Tyrant server as a daemon (casket.tch is the database file):

ttserver -dmn -pid /tmp/ttserver.pid /tmp/casket.tch

4. Best documentation on how to install Tokyo Cabinet & Tyrant: http://openwferu.rubyforge.org/tokyo.html

I personally go the git route.

5. To get the ‘distributed‘ feature, use a Memcache client to connect to the Tokyo Tyrant server. The default port is 1978 (see the sketch after this list).

6. In my opinion, Tokyo Tyrant works best as a Cache/Session/find_by_id solution.
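
A minimal sketch of point 5, assuming the python-memcached package is installed and a ttserver instance is listening on localhost at the default port 1978:

import memcache

# Tokyo Tyrant speaks the memcached protocol, so a stock client works.
client = memcache.Client(['127.0.0.1:1978'])

client.set('session:42', 'some-serialized-session-data')
print client.get('session:42')    # -> 'some-serialized-session-data'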


PyPy: Assembler Wanted

March 11, 2009 § Leave a comment

March has been a good month for the PyPy project. According to this post, their JIT runs the code below 20x faster than CPython:


i = 0
while i < 10000000:
    i = i + 1

From the blog post(s), the JIT is generated by tracing the code’s bytecode. The post said that most of their crashes happened because of unsupported operators in the assembler back-end.

Since the PyPy project allows multiple back-ends (well… that’s the whole point), having more assembly developers working on the JIT would go a long way toward realizing its speed potential.

(I know, I know, the point of the PyPy project is not just speed, but a lot of people are enthusiastic about it for that reason.)

So, if you have friends who know assembly, spread the word…
