August 25, 2009 § Leave a Comment
First of all, I love Tokyo and already use it as a secondary database for Pylons development; so far it has been a great success.
Since my box does not have a lot of memory but has plenty of disk space, it makes sense to use Tokyo as a caching solution instead of memcache.
Quick googling revealed that Jack Hsu has already implemented a Tokyo extension for beaker. His snippet works out of the box.
For my use case, I changed the serializing strategy to use pickle instead of json. My reasoning is that pickle can serialize complex objects, and I have no portability requirement for the cached data.
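The json-to-pickle swap can be sketched roughly like this (hypothetical `dumps`/`loads` helpers, not Jack Hsu's actual snippet; the stdlib `pickle` module stands in for Python 2's `cPickle`). The point is that pickle round-trips objects json cannot:

```python
import pickle

# Hypothetical serializer helpers for a beaker cache backend.
# Unlike json, pickle round-trips arbitrary Python objects.

def dumps(value):
    # The highest protocol is the fastest, most compact binary form.
    return pickle.dumps(value, pickle.HIGHEST_PROTOCOL)

def loads(data):
    return pickle.loads(data)

class Session:
    """A toy object that json could not serialize directly."""
    def __init__(self, user_id, roles):
        self.user_id = user_id
        self.roles = set(roles)  # sets are not json-serializable

restored = loads(dumps(Session(42, ["admin", "editor"])))
print(restored.user_id, sorted(restored.roles))  # → 42 ['admin', 'editor']
```

The trade-off is exactly the one mentioned above: pickled data is only readable from Python, so this only makes sense when nothing else needs to read the cache.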
You can find the Tokyo extension here. I added a Redis extension as well, since it is very similar to Tokyo.
Edit (08/26/2009): Added extension for Dynomite
Edit (08/26/2009): Added extension for Ringo
April 21, 2009 § Leave a Comment
I’m currently benchmarking Moneta’s Tokyo vs. Redis vs. Memcache implementations.
Results so far:
- Surprisingly, the Tokyo implementation is significantly faster, even faster than the memcache implementation. Why?
- As in the LightCloud benchmark, the size of the value does not affect the speed of storing/getting.
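A benchmark like the one above can be sketched with a small harness that times set/get round-trips against anything exposing memcache-style `set()`/`get()`. This is not the actual Moneta benchmark; the `DictStore` stand-in is hypothetical, and a real run would pass a live Tokyo/Redis/memcache client instead:

```python
import time

def bench(store, n_items, value_size):
    """Time n_items set/get round-trips for values of a given size."""
    value = b"x" * value_size
    start = time.perf_counter()
    for i in range(n_items):
        store.set("key:%d" % i, value)
    set_elapsed = time.perf_counter() - start
    start = time.perf_counter()
    for i in range(n_items):
        store.get("key:%d" % i)
    get_elapsed = time.perf_counter() - start
    return set_elapsed, get_elapsed

class DictStore:
    """In-memory stand-in; a real run would use a Tokyo/Redis/memcache client."""
    def __init__(self):
        self._d = {}
    def set(self, key, value):
        self._d[key] = value
    def get(self, key):
        return self._d.get(key)

# Varying value_size while holding n_items fixed is how you check the
# "value size does not affect speed" observation.
for size in (10, 1000, 100000):
    s, g = bench(DictStore(), 1000, size)
    print("size=%6d  set=%.4fs  get=%.4fs" % (size, s, g))
```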
April 16, 2009 § 1 Comment
After reading what LightCloud can do, it was only natural to create objects that serialize to Tokyo.
And that is exactly what I did. The project (called Hail) is still in its infancy, but the profile tests already answer some of my questions and curiosity about LightCloud (and Tokyo).
One obvious weakness I need to tackle: serializing is too slow.
Questions that got answered:
- The slowness is not caused by the size of the objects; it is caused by the number of items.
- LightCloud does execute a lot of function calls. Most of them are really fast, though.
- EDIT: Tokyo is fast, especially once I compared it with Memcache; LightCloud, however, is not. Tokyo is not as fast as I thought, but this is not a final verdict: I should run a profile_test against a raw Tokyo Tyrant node. On top of that, LightCloud’s overhead is not negligible.
- Serializing with cjson is faster than with cPickle. That’s surprising.
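The cjson-vs-cPickle comparison can be reproduced with `timeit`. Both of those modules are Python 2 era, so the stdlib `json` and `pickle` modules stand in here; actual timings will vary with the Python version and the shape of the data, which is exactly why it is worth measuring rather than assuming:

```python
import json
import pickle
import timeit

# A sample document with nesting, similar in spirit to a cached object.
doc = {"id": 1, "tags": ["a", "b", "c"], "nested": {"k": list(range(50))}}

json_t = timeit.timeit(lambda: json.dumps(doc), number=10000)
pickle_t = timeit.timeit(
    lambda: pickle.dumps(doc, pickle.HIGHEST_PROTOCOL), number=10000
)
print("json:   %.4fs" % json_t)
print("pickle: %.4fs" % pickle_t)
```

Note that json only wins on json-representable data; for arbitrary objects it is not an option at all, which is the portability trade-off mentioned earlier.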
Next, I should test getting items from both memcache and Tokyo. I expect it to be really fast.
March 18, 2009 § 4 Comments
1. Tokyo Tyrant Tutorial
2. Tokyo Tyrant Documentation
3. Starting the Tyrant server as a daemon (casket.tch is the database file):
ttserver -dmn -pid /tmp/ttserver.pid /tmp/casket.tch
4. Best documentation on how to install Tokyo Cabinet & Tyrant: http://openwferu.rubyforge.org/tokyo.html
I personally went the git route.
5. To get the ‘distributed’ feature, use a Memcache client to connect to the Tokyo Tyrant server. The default port is 1978.
6. In my opinion, Tokyo Tyrant works best as a Cache/Session/find_by_id solution.
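Point 5 can be sketched as follows. Tokyo Tyrant speaks the memcached protocol, so with python-memcached installed and ttserver running on the default port, the real client would be `memcache.Client(["127.0.0.1:1978"])`. Since that assumes a live server, the sketch below uses a hypothetical dict-backed stub with the same `set()`/`get()` shape; the `cache_get_or_set` helper is also my own illustration, not part of Tyrant or beaker:

```python
# With a live ttserver, the real client would be:
#
#     import memcache
#     client = memcache.Client(["127.0.0.1:1978"])
#
# The helper below only assumes memcache-style set()/get().

def cache_get_or_set(client, key, compute):
    """Return the cached value for key, computing and storing it on a miss."""
    value = client.get(key)
    if value is None:
        value = compute()
        client.set(key, value)
    return value

class StubClient:
    """Minimal memcache-style stand-in, used here instead of a live server."""
    def __init__(self):
        self._d = {}
    def set(self, key, value):
        self._d[key] = value
        return True
    def get(self, key):
        return self._d.get(key)

client = StubClient()
print(cache_get_or_set(client, "user:1", lambda: {"name": "alice"}))
# Second call is a hit: the compute function is not called again.
print(cache_get_or_set(client, "user:1", lambda: {"name": "recomputed"}))
```

This get-or-set pattern is exactly the Cache/Session/find_by_id usage from point 6: the key is an id, and the compute function is the expensive database lookup.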