Marcin bloguje

.impressions.memos.tech.

Zookeeper + Curator = Distributed Sync

An application developed for one of my recent projects at TouK involved multiple servers, and there was a requirement to ensure failover for the system’s components. I already had a few separate components and didn’t want to add yet another one, and since a Zookeeper ensemble was already running - required by one of the services - I decided to base my solution on it.

What is Zookeeper?

At its core, it is a distributed synchronization framework. It implements Paxos-style algorithms (http://en.wikipedia.org/wiki/Paxos_(computer_science)) to ensure that no split-brain scenarios can occur. This is quite an important feature, because it means I don’t have to worry about that class of problems myself. To get high availability you just create an ensemble of a couple of its instances. Zookeeper is basically a virtual filesystem, with files, directories and so on. One could ask: why another filesystem? Well, this one is rather special, designed for distributed systems. The reason it is easy to build locking algorithms on top of Zookeeper is its ephemeral nodes - files that exist only as long as the connection that created them. When the client disconnects, the file disappears.
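
For instance, creating an ephemeral node with the bare Zookeeper client looks roughly like this (a minimal sketch; the path and payload are made up, and zk is assumed to be a connected org.apache.zookeeper.ZooKeeper client):

// the node lives only as long as the session of the client that created it
zk.create("/services/worker-1", "host:port".getBytes(),
        ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);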

With such primitives in place, it’s fairly easy to build higher-level synchronization algorithms.

And with those in turn, you can safely integrate multiple services while keeping them loosely coupled, in a distributed way.

Zookeeper from developer’s POV

With all the base Zookeeper services started, it seems there is nothing left to do but connect to it and start implementing the necessary algorithms. Unfortunately, the API is quite basic: it offers file and directory abstractions, with the addition of different node types (file types) - ephemeral and sequential. It is also possible to watch a node for changes.

Using bare Zookeeper is hard!

Creating connections is tedious - and there are lots of things to take care of. Handling an established connection is hard too - when connecting to an ensemble, a session also has to be negotiated. During the whole process a number of exceptions can occur - these are “recoverable” exceptions that can be handled gracefully without breaking the connection.
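
To get a feeling for it: even the bare connection handshake needs a watcher and some waiting before the session is usable (a minimal sketch; the connection string is made up):

import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.*;

final CountDownLatch connected = new CountDownLatch(1);
ZooKeeper zk = new ZooKeeper("zkhost:2181", 30000, new Watcher() {
    public void process(WatchedEvent event) {
        // the constructor returns immediately - the session is usable
        // only once the SyncConnected event arrives
        if (event.getState() == Event.KeeperState.SyncConnected) {
            connected.countDown();
        }
    }
});
connected.await();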

So, the Zookeeper API is hard.

Even if one is proficient with that API, there are still the recipes. The whole point of using Zookeeper is to implement more sophisticated algorithms on top of it. Unfortunately those aren’t trivial, and it is again quite hard to implement them without bugs.

And since distributed systems are hard enough already, why would anyone want yet another difficult-to-handle tool?

Enter Curator

Happily, the guys from Netflix implemented a nice abstraction for dealing with Zookeeper internals. They called it Curator and use it extensively in the company’s environment. Curator offers a consistent API for Zookeeper’s functionality. It even implements a couple of recipes for distributed systems.
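
Bootstrapping it takes just a few lines (a sketch - the connection string is made up; this client instance is what the snippets below refer to as client):

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;

// retry up to 3 times, with exponential backoff starting at 1 second
CuratorFramework client = CuratorFrameworkFactory.newClient(
        "zkhost:2181", new ExponentialBackoffRetry(1000, 3));
client.start();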

File read/write

The most basic use of Zookeeper is as a distributed configuration repository. For this scenario I only need read/write capabilities - being able to write and read files on the Zookeeper filesystem. This code snippet writes a sample JSON document to a file on the ZK filesystem.


// make sure the parent path exists before writing
EnsurePath ensurePath = new EnsurePath(markerPath);
ensurePath.ensure(client.getZookeeperClient());

String json = "...";
// update the node if it already exists, create it otherwise
if (client.checkExists().forPath(statusFile(core)) != null)
    client.setData().forPath(statusFile(core), json.getBytes());
else
    client.create().forPath(statusFile(core), json.getBytes());
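
Reading the file back is symmetrical (a minimal sketch, assuming the same client and statusFile() helper):

// fetch the raw bytes stored in the znode
byte[] data = client.getData().forPath(statusFile(core));
String json = new String(data);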


Distributed locking

With multiple systems there may be a need for an exclusive lock on some resource, or a big system may require its components to synchronize using locks. This recipe is an ideal match for those situations.



lock = new InterProcessSemaphoreMutex(client, lockPath);
if (lock.acquire(5, TimeUnit.MINUTES)) { // false means the lock was not obtained in time
    // ... do sth ...
    lock.release();
}


 (from https://github.com/zygm0nt/curator-playground/blob/master/src/main/java/pl/touk/curator/LockingRemotely.java)

Service Advertisement

This is quite an interesting use case. With many small services running on different servers, it is not wise to exchange IP addresses and ports between them by hand. When some of those services may go down while others try to replace them, the task gets even harder.

That’s why Zookeeper, once in place, can be utilised as a registry of existing services.

When a service starts, it registers itself in the ServiceRegistry, offering basic information like its purpose, role, address, and port.

Services that want to use a specific kind of service request access to some instance. This way of configuring neatly decouples services from their configuration.

Basically this scenario needs two steps:

1. Service starts and registers its presence (https://github.com/zygm0nt/curator-playground/blob/master/src/main/java/pl/touk/curator/WorkerAdvertiser.java#L44):



ServiceDiscovery discovery = getDiscovery();
discovery.start();
ServiceInstance si = getInstance();
log.info(si);
discovery.registerService(si);
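
The getInstance() helper might build the instance description along these lines (a sketch; the builder comes with Curator’s service discovery, and the payload is the WorkerMetadata class shown further below):

ServiceInstance<WorkerMetadata> si = ServiceInstance.<WorkerMetadata>builder()
        .name("worker")
        .address(listenAddress)
        .port(listenPort)
        .payload(new WorkerMetadata(UUID.randomUUID(), listenAddress, listenPort))
        .build();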



2. Another service - on another host, or in another JVM on the same machine - tries to discover who is implementing the service (https://github.com/zygm0nt/curator-playground/blob/master/src/main/java/pl/touk/curator/WorkerFinder.java#L50):


instances = discovery.queryForInstances(serviceName);
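
The returned collection can then be used to pick a provider (a sketch; the getters are assumed to match the WorkerMetadata fields below):

for (ServiceInstance<WorkerMetadata> instance : instances) {
    // each entry carries the advertised whereabouts of one worker
    WorkerMetadata worker = instance.getPayload();
    log.info(worker.getListenAddress() + ":" + worker.getListenPort());
}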

The whole concept here is ridiculously simple - a service advertising its presence just stores a file with its whereabouts. A service looking for providers just looks into a specific directory and reads the stored definitions.

In my example, the structure advertised by services looks like this (plus some getters and a constructor - the rest is here: https://github.com/zygm0nt/curator-playground/blob/master/src/main/java/pl/touk/model/WorkerMetadata.java):



public final class WorkerMetadata {
    private final UUID workerId;
    private final String listenAddress;
    private final int listenPort;
}


Source code

The above recipes are available in the Curator library (http://curator.incubator.apache.org/). Usage examples for the recipes are in my github repo at https://github.com/zygm0nt/curator-playground

Conclusion

If you’re in need of a reliable platform for exchanging data and managing synchronization in a distributed fashion - just choose Zookeeper. Then add Curator for ease of use. Enjoy!


  1. all source code fragments taken from this repo: https://github.com/zygm0nt/curator-playground

Operational Problems With Zookeeper

This post is a summary of what was presented by Kathleen Ting at the StrangeLoop conference. You can watch the original here: http://www.infoq.com/presentations/Misconfiguration-ZooKeeper

I’ve decided to put this selection here for quick reference.

Connection mismanagement

  • too many connections

      WARN [NIOServerCxn.Factory: 0.0.0.0/0.0.0.0:2181:NIOServerCnxn$Factory@247] - Too many connections from /xx.x.xx.xxx - max is 60
    
  • running out of ZK connections?

    • set maxClientCnxns=200 in zoo.cfg
  • HBase client leaking connections?

    • fixed in HBASE-3777, HBASE-4773, HBASE-5466
    • manually close connections
  • connection closes prematurely

      ERROR: org.apache.hadoop.hbase.ZooKeeperConnectionException: HBase is able to connect to ZooKeeper but the connection closes immediately.
    
  • in hbase-site.xml set hbase.zookeeper.recoverable.waittime=30000ms

  • pig hangs connecting to HBase

      WARN org.apache.zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused!
    

    CAUSE: location of ZK quorum is not known to Pig

    • use Pig 0.10, which includes PIG-2115
    • if there is an overlap between TaskTrackers and ZK quorum nodes
      • set hbase.zookeeper.quorum to final in hbase-site.xml
      • otherwise add hbase.zookeeper.quorum=hadoophbasemaster.lan:2181 in pig.properties

Time mismanagement

  • client session timed out

      INFO org.apache.zookeeper.server.ZooKeeperServer: Expiring session <id>, timeout of 40000ms exceeded
    
    • ZK and HBase need the same session timeout values
      • zoo.cfg: maxSessionTimeout=180000
      • hbase-site.xml: zookeeper.session.timeout=180000
    • don’t co-locate ZK with IO-intense DataNode or RegionServer
    • specify right amount of heap and tune GC flags
      • turn on parallel/CMS/incremental GC
  • clients lose connections

      WARN org.apache.zookeeper.ClientCnxn - Session <id> for server <name>, unexpected error, closing socket connection and attempting reconnect java.io.IOException: Broken pipe
    
    • don’t use SSD drive for ZK transaction log

Disk management

  • unable to load database - unable to run quorum server

      FATAL Unable to load database on disk java.io.IOException: Failed to process transaction type: 2 error: KeeperErrorCode = NoNode for <file> at org.apache.zookeeper.server.persistence.FileTxnSnapLog.restore(FileTxnSnapLog.java:152)
    
    • archive and wipe /var/zookeeper/version-2 if other two ZK servers are running
  • unable to load database - unreasonable length exception

      FATAL Unable to load database on disk java.io.IOException: Unreasonable length = 1048583 at org.apache.jute.BinaryInputArchive.readBuffer(BinaryInputArchive.java:100)
    
    • server allows a client to set data larger than the server can read from disk
    • if a znode is not readable, increase jute.maxbuffer
      • look for "Packet len <xx> is out of range" in the client log
      • increase it by 20%
      • set it via JVMFLAGS="-Djute.maxbuffer=yy" for bin/zkCli.sh
      • fixed in ZOOKEEPER-1513
  • failure to follow leader

      WARN org.apache.zookeeper.server.quorum.Learner: Exception when following the leader java.net.SocketTimeoutException: Read timed out 
    
    CAUSE:
    • disk IO contention, network issues
    • ZK snapshot is too large (lots of ZK nodes)

    SOLVE:

    • reduce IO contention by putting dataDir on dedicated spindle
    • increase initLimit on all ZK servers and restart, see ZOOKEEPER-1521
    • monitor network

Best Practices

DOs

  • separate spindles for dataDir & dataLogDir
  • allocate 3 or 5 ZK servers
  • tune garbage collection
  • run zkCleanup.sh script via cron

DON’Ts

  • don’t co-locate ZK with I/O-intense DataNode or RegionServer
  • don’t use SSD drive for ZK transaction log

You may run a Zookeeper server as an observer - a non-voting member:

  • in zoo.cfg

      peerType=observer
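
    Apart from that, the ZooKeeper docs say the observer also has to be marked in the server list on every node, e.g.:

      server.4=observerhost:2888:3888:observer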
    

After WHUG Meeting

Here are the slides from the talk I gave yesterday. If you have any questions, please ask.

WHUG 8. Beyond Hadoop - Checking Other Options

This Thursday - 29.11.2012 - I will be giving a talk at the Warsaw Hadoop User Group. You can sign up here: http://www.meetup.com/warsaw-hug/

And what will I be talking about? A paste from the WHUG site:

Marcin will focus on how the Hadoop ecosystem cooperates with other tools. He will show how to process graphs simply and conveniently, and how to apply the Big Data approach in real time. He will also touch on easier ways of writing Map-Reduce algorithms.

It will be a slightly less technical (but still practical) tour around the edges of the topics usually discussed in connection with Hadoop.

The presentation will cover tools such as Cascading, Storm and Titan.

See you there!

Hadoop HA Setup

With the advent of Hadoop’s 2.x version, there finally is a working High-Availability solution. Two of them, even. And they are now really easy to configure and use. They no longer require external components like DRBD. It all comes neatly packed in the Cloudera Hadoop distribution - the precursor of this solution.

Read on to find out how to use it.

Raspberry-pi

About a month ago I finally received my very own Raspberry Pi board! Don’t know what that is? Go read about it at their website.

For the sake of completeness, let me describe it as a prototyping platform with an ARM processor. It is similar in concept to Arduino, except it has far fewer extensions available (I’ve only found some on the Adafruit pages).

So, here is the obligatory picture.

My very own R-Pi

It can run a Linux distribution, so anyone familiar with that can have a go with this low-powered computer.

The board itself has been on the market for quite some time now. That’s why there are lots of interesting resources and projects out there that you can build with it.

Here are just a bunch of them:

Do you also own R-Pi? Share what you plan to do with it.

Hadoop for Enterprises

Hadoop’s usage as a big data processing framework has gained a lot of attention lately. Now not only the big players see that they can embrace the data their sites or products generate and develop their businesses on it. For that to happen two things are needed: the data itself, and the means of processing really big amounts of it.

Gathering data is relatively easy. It doesn’t have to be structured data, and you don’t need to plan its usage up front. Just start collecting it, and then you may experiment with its potential uses. If it turns out to be useless rubbish, deleting it won’t be hard. But imagine the value it may contribute to your business:

  • faster services - working on optimized data
  • more clients - because of more relevant search results
  • happy clients - your service can “read their minds”
  • etc.

There are many companies that utilize the Hadoop ecosystem for their own needs. You can read about some of them here: http://wiki.apache.org/hadoop/PoweredBy But since that page lacks insight into specific applications of Hadoop, I’ve tried to delve into the details of how Hadoop helped tame some companies’ big data sets.

Facebook

Being a widely used social network provider, they require no introduction. However, if you’ve lived under a rock for the last couple of years, just visit their website: http://facebook.com

Their main usage is data warehousing. Since they need to access the data fast and reliably, they required real-time querying of their huge, ever-growing data set. The switch away from MySQL databases was forced by the increasing workloads that standard databases could not handle. What they got “out of the box” with Hadoop were all the benefits of a distributed file system (HDFS features). They expanded those ideas even further and implemented a truly Highly Available file system without a Single Point of Failure.

Facebook has 3 interesting usage scenarios in which Hadoop plays a major role:

  • Titan - Facebook’s messaging system. It processes messages exchanged between users and ensures that this happens fast and without glitches. Here Hadoop is used mainly as huge, unlimited storage.
  • Puma - Facebook Insights - a tool providing page statistics for advanced Facebook users. Based on streams of data (clicks, likes, shares, comments and impressions) it graphs those data and makes them available near-instantly.
  • ODS - Operational Data Store - which stores Facebook’s internal metrics - collections of OS and cluster health metrics. It also facilitates multiple accounting solutions.

Twitter

This popular micro-blogging platform, where you can register an account and follow friends and celebrities for their micro-messages, does some pretty interesting things with its Hadoop cluster.

One of their motivations is to speed up their web page’s functionality, which is why they compute users’ friendships in Twitter’s social graph with Hadoop. Using connections between users, they calculate the users’ relationships to each other and identify groups of users.

Since this service’s users generate lots of content, the company conducts research based on natural language processing. They probe what can be told about a user from their tweets, and use tweets’ contents for advertising purposes, trend analysis and much more.

From tweets and users’ behaviour they characterise usage scenarios. They also gather usage statistics, like the number of searches and tweets per day. Based on this seemingly irrelevant data they run comparisons of different types of users. Twitter analyzes the data to determine whether mobile users, users of third-party clients, or power users use Twitter differently from average users. These may seem like really specific applications, but they are nevertheless very original, and they build on the data Twitter has been gathering for some time now.

eBay

Being the biggest auction site on the Internet, eBay uses Hadoop processing to increase search relevance based on click-stream and user data. This seems pretty obvious, considering their area of operation.

They also have one other interesting application - they try hard to automatically fill in auctioned objects’ metadata, based on descriptions and other data provided by users. They employ a data mining approach for these tasks, and judging from their constant growth it seems to work.

LinkedIn

A social network for professionals, though a lot smaller than Facebook. Based on click-streams they discover relations between users. All the data concerning the latest visits on your profile, or people you may know from other places - this comes from Hadoop-based analysis of the clicks people make all the time on their sites.

There is also a very neat feature called InMaps (http://inmaps.linkedinlabs.com/), which analyses declared schools and companies and generates data for a graph of your friends, grouped into clusters.

Last.fm

This on-line radio site, praised by many for its invaluable recommendation system, seems like a rather small and simple service. But behind the facade of a simple web page, lots of data are being processed so that their services can reach a certain level of perfection.

The large volume of their data comes from scrobbles: each user of their service listening to a song generates a note about this fact, called a scrobble. Based on that and user profiles they calculate global band popularity charts, maps of bands’ popularity, and many more usage statistics and timeline charts.

Conclusion

All these companies try to detect and trace new patterns in seemingly chaotic data sets. Perhaps you could do the same? Analyze your data and expand your business’s value?


SoapUI Ext Libs and Its Weirdness

Suppose you want to add some additional jars to your SoapUI installation. It should all work fine if you put them in the bin/ext directory. It is scanned at startup, and jars found there are automatically added to the classpath.

However, if you want to add some JDBC drivers and happen to be using a SoapUI version higher than 3.5.1, it is a bit trickier.

You may face this NoClassDefFoundError:

An error occured [oracle/jdbc/Driver], see error log for details
java.lang.NoClassDefFoundError: oracle/jdbc/Driver

If so, try registering your drivers with the registerJdbcDriver function, like I did in this snippet of code:
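
Something along these lines should do (a sketch; GroovyUtils is SoapUI’s scripting helper class, and the Oracle driver class name is used as an example):

import com.eviware.soapui.support.GroovyUtils;

// run inside a SoapUI Groovy script step: registers the driver with
// SoapUI's own class loader, so JDBC steps can find it
GroovyUtils.registerJdbcDriver("oracle.jdbc.OracleDriver");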

What a crappy thing!


What Is NoSQL Good For?

… or how I ended up writing a CouchDB proof of concept app?

Once upon a time I set out on a journey to discover the NoSQL land. I decided that doing simple queries wouldn’t be interesting enough, so I chose to create an app based on some NoSQL database.

The main idea was to create an app that would dynamically update itself with incoming geographic data. Since there are myriads of geo-data available on the internet, you can pick your favourite source and load it into your SQL database of choice.

In my case the primary source of data was a proprietary database - more specifically, one table in it, continuously updated with new data. To make that data visible on my map I needed to:

  • buffer the huge stream of those records - so as not to overload other services with heavy traffic, and not to flood the frontend
  • convert them to my own representation
  • display them - have a presentation layer in a browser, since a browser-based frontend was the easiest and fastest to develop

The idea of the front-end HTML page was to show new points on the map: from the moment the page is opened, records appearing in the database table should show up interactively on the screen.

Toys used

For the first step I chose the RabbitMQ broker. A queue on the broker receives messages - one message per database table row. Then some simple Groovy middleware converts the data to the appropriate format and puts it into another db - this time a db specific to my app.
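
The feeding side is then just a plain AMQP publish (a minimal sketch with the RabbitMQ Java client; the host and queue name are made up):

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

ConnectionFactory factory = new ConnectionFactory();
factory.setHost("localhost");
Connection connection = factory.newConnection();
Channel channel = connection.createChannel();

// one message per table row, serialized to JSON
channel.queueDeclare("geo-events", false, false, false, null);
channel.basicPublish("", "geo-events", null, json.getBytes());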

You may ask why incorporate another database. It is good for separating environments - assuming the original data contains some vulnerable content that should be anonymised, or we just don’t feel comfortable exposing the whole database of some XYZ-system just to give access to one of its tables.

Since for my presentation layer I chose HTML+JS without any application-server back-end, I decided on CouchDB. It seemed like a perfect match for this scenario. Why? Ease of use and a REST API with JSON responses - just great for interacting with my simple front-end.

The flow of things was as shown on the image below:

diagram

Avro - for the beginning

As you can see, I chose JSON as my data format. I had been considering Apache Avro in the first place, but using it was a real pain in the ass. Avro is used in Apache Hadoop as a serialization layer, so it would seem a safe bet, but it has virtually no documentation. Once you tear through the unintuitive interface and manage to handle all those unthinkable exceptions, you do get a few pros from this library. It’s great in that it does not require code generation - I like schemas being made on the fly. It also offers sending data in a binary format, which was not necessary here, but is nevertheless a nice feature.

What I certainly didn’t like was its orientation towards files rather than chunks of data - it was not at all obvious how I should send data over the wire.

Then I found out it can produce JSON output, which would have worked for me, except that the output could not be parsed by other JSON libraries :) (I’ve asked about it on stackoverflow, but with no luck).

If my whining hasn’t put you off and you would still like to see how to use Avro, try this unit test in the project’s GitHub repo: AvroSimpleTest.groovy

Svenson

I dropped Avro in favour of a simple JSON lib called Svenson, and that was painless. The only thing I was forced to do was write my model class in Java - the rest of the project is written in Groovy. I’ve no idea why that was necessary, and didn’t want to look into it.

RabbitMQ

Further along the way is RabbitMQ, which is fed records by the middleware written in Groovy. Since I use ActiveMQ on a day-to-day basis, I decided to try something new. This broker is a really nice piece of software. Being written in Erlang makes it really fast. What’s more, it has some extensive capabilities and is easy to approach for anyone familiar with messaging (JMS and friends). For such a lightweight product it is really powerful - it implements AMQP!

CouchDB

From the broker’s queue, messages are again fetched by middleware, just to be put into CouchDB and exposed through a view. This database is also written in Erlang. It’s very reliable; however, the way it handles view refreshing isn’t the most pleasant one, performance-wise.

A word of advice - if you’re on a Debian derivative, be cautious with the apt-repository version. It’s rather _ancient_. Also remember to add allow_jsonp = true to your config file /opt/couchbase/etc/couchdb/local.ini. It’s not enabled by default, and not having it set results in empty responses from the CouchDB server.
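
The option enables JSONP responses, so a page served from another host can query CouchDB via script tags; it sits in the [httpd] section of local.ini:

[httpd]
allow_jsonp = true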

The problem here is that the browser doesn’t allow querying a web server with a hostname other than the one the script originates from. More on this case here. It seems my problem could also be overcome by making the url in index.html and the hostname CouchDB listens on the same address.

I’ve also created a view that exposes an event by key: view code

Presenting the dots

In place of a back-end, I’ve done some jQuery-based AJAX calls - nothing too fancy. Everything necessary for the presentation layer is in this file.

Things to consider

Please bear in mind that this whole application is a playground rather than a full-fledged project! After creating all the parts I have some doubts about some of the architectural decisions I made. I don’t think security has been taken into account seriously enough. Also, scalability was never an issue ;-)

If you have some thoughts about any of the aspects mentioned in this post, please feel free to comment or contact me directly :)

You may also try the application yourself - it’s on GitHub.

Comments

@Piotrek, here is a link to JIRA ticket concerning this feature. I think it is being discussed ATM: https://issues.apache.org/jira/browse/COUCHDB-431

About Same Origin Policy - there’s now Cross-Origin Resource Sharing available in most common browsers. It should help you, if CouchDB has support for it.

@klausa, thanks for your advice. I’ve made some changes to the post.

>The main idea was to create an app, that would dynamically update itself with geographic data flowing in.

Not to nitpick, but that doesn’t seem like an idea for an app. I think you should explain what the displayed data is. If you moved your ‘Presenting the dots’ paragraph just above ‘Toys used’, it would be clear what you wanted to do with this app.

>Also remember to add allow_jsonp = true to you config file /opt/couchbase/etc/couchdb/local.ini.

I think you should explain what that option *really* does.

Other than that, nice post!

5 Best Things to Do With Your Kindle

I bought a Kindle (3rd generation, Wi-Fi only) some time ago - about half a year ago. I’ve read some books and done some web browsing (awful, quite unpleasant). Gradually I became more and more curious about other things possible to achieve with this slate-looking piece of tech. These are my thoughts and ideas.

Got a Kindle? Use it every day? Feel like modding or extending your ways of usage? Great! Read on, and share your thoughts in comments!

  1. Readability! - This web app is great! Generally it is a simple plug-in for your browser that shows a little button somewhere on the toolbar; if you click it, the page you’re currently reading is transformed into a nice and sleek content-only page. Look at the screen below:

    This plug-in’s additional function is sending pages to your Kindle account. That’s the nicest way to read all those articles from RSS sources :) The only limitation is that graphics won’t be included if the resulting file would exceed the allowed size of Kindle documents - that’s 2MB AFAIR.
  2. Install some hacks! - be that serious hacks or rather some simple software modifications:
    • read all book formats with Calibre - link
    • play Zork on a Kindle!!! - link
    • alternative Kindle keyboard - link
    • custom fonts - link
  3. Install custom screen-savers - do this to be able to use your own images, because you’ve always wanted to have something else on the screen when your Kindle is in standby mode. Of course the original screen-savers look great, but there are only a few of them. Installing this hack gave me the opportunity to have a multitude of new images. Now my Kindle looks even better!
  4. Try out the Chinese Kindle software - doukan.com - As a matter of fact I haven’t installed it yet. It doesn’t look good enough for me, and it has some minor problems. But it is great that there actually is another option - I’m not forced to use the official firmware. And this distribution has many nice features, like PDF reflow.
  5. Enable Chinese font support on your Kindle - damn! I’d like a simple, step-by-step tutorial on how to set up Chinese fonts on a Kindle. I’d like to put a font file on my device, open some Chinese book and be able to see the actual characters.
  6. Programming for the Kindle - with the official Kindle SDK - well, not quite! Unfortunately this is reserved for the Chosen Ones. I applied for the SDK, but they haven’t sent me my developer key yet, and it’s been ~2 months. This is not “being supportive” or “supporting the community”.

And how do you use your Kindle? Perhaps you’re doing some serious, crazy things with it? Share your thoughts!

Comments

I applied for the Kindle SDK almost a year ago and... nothing, silence. Apparently I’m not cool enough to be handed that toy :)

As for books - true, DRM is everywhere. But DRM in ebooks works like any other DRM (i.e. poorly - Empik’s, Amazon’s and others’ DRM can be stripped), so a user with a bit of determination will manage.

PS. My Kindle decided to give up the ghost sometime last week, 10 days before the warranty ran out. A friend’s Kindle died a bit (a week or two?) earlier. Amazon ships replacements without a murmur, but... I can’t shake the feeling that these gadgets were designed for a year of life. At least the first pre-order batch; the current ones are (I hope) more durable.

Thanks for the replies :)

I also bought my Kindle ~6 months ago, so let me chime in:

Ad 1. I bought directly from Amazon and paid no VAT (IIRC there is no customs duty on electronics from the USA).
Ad 2. IMHO the browser handles JS quite well, but it is terribly slow and navigation is awkward.
Ad 3. By default only WPA2-PSK, but there is a normal wpa_supplicant inside, so you can edit the config yourself and go wild.
Ad 4. I read quite a lot and charge once a month, maybe slightly more often.
Ad 5. You can buy from Amazon; I buy from Amazon UK, because when I was setting up my Kindle it had lower book prices. As for DRM, it only supports its own (azw, I think, i.e. mobi + Amazon’s DRM); for other formats you have to strip the DRM and convert to a supported format (Calibre rules!).

OK, so one by one:

1. I bought directly from Amazon - the US one, because it’s the only one that ships Kindles to Poland. I believe it is the cheapest possible option. Amazon covers the customs duty, you don’t have to worry about a thing, everything is done for you. The whole affair cost me around 400 PLN (Kindle 3, Wi-Fi only). From what I’ve seen, Allegro is definitely more expensive.

2. Browsing the web on the Kindle is only for cases of real need. I don’t like it - the display is so unresponsive that casual surfing is out of the question. If you really must check something, you can, but you wouldn’t do it for pleasure ;-)

3. I don’t have access to WPA2 with RADIUS. I use WPA2 with PSK - and it works flawlessly. Maybe google around?

4. True, it lasts a month - you just have to remember to turn Wi-Fi off, because it drains the battery even on standby.

5. In Poland you can still buy Kindle books from Amazon without a problem (over Wi-Fi; I don’t know about 3G). As for Polish shops, as long as they offer Kindle-supported formats, there should be no problem. Personally I buy rather few books for the Kindle - I use freely available classics, plus PDFs bought separately, etc. Generally you won’t read books in epub and similar formats, although there are hacks for that (including the Chinese software I wrote about).

In any case, I recommend the purchase, because it’s really worth it - unless you’d prefer something like an iPad (colours, easy surfing); then the Kindle is not for you :)

Sorry for the Polish, but I’m about to buy a Kindle and have a few questions; forgive me if I’m cluttering up your post:

1. Where did you buy it - directly from Amazon or through an Allegro middleman, and what about customs duty and other taxes?
2. The browser in the Kindle 3 is supposedly WebKit-based; how does it cope with pages in practice, and what about JS?
3. Does the Wi-Fi support enterprise WPA2 encryption with a RADIUS server, or only WPA2 with PSK?
4. How is the battery? I’ve heard it lasts a month - is that true?
5. Can you buy Kindle books from Amazon in Poland? Are there any Polish shops with legal Polish books that I could later load onto the Kindle without problems, or does DRM make that impossible?