
Marcin bloguje

.impressions.memos.tech.

Deus Ex - What a Game!

Some time ago I became obsessed with one game. It kept me up late, fed my imagination and sucked up all my free time - so basically it did what a good game should. It was not a new production, quite the contrary - its release date was the year 2000. The game was Deus Ex.

The year is 2052 and you wake up as a rookie named JC Denton - a UNATCO (United Nations Anti-Terrorist Coalition) special agent. In the beginning your orders are clear: you're there to fight the terrorists. The first mission takes place in the depths of the Statue of Liberty, and it only gets better further into the game. The plot wanders, there seem to be many foreign forces entangled, and it may become clear to you that you're actually being used as a tool - a blunt one. Gradually the plot unfolds and you become aware of the whole conspiracy-theory angle. You interact with the Majestic 12 and Illuminati secret organizations and become part of a global terrorist war.

For a game released so many years ago - even before the 11 September 2001 terrorist attacks on the WTC - it is astonishingly predictive. It envisions cyberwarfare, the rise of China's economic importance, the threat of terrorism and a whole range of technological topics. Well, mainly some sci-fi visions that became reality in the years following the game's release ;-)

Deus Ex has an extremely well-thought-out world. Everything is well laid out and there are no major glitches. It also offers multiple solutions to almost every puzzle - you can fight your way through, but you can also try to bribe, steal, sneak and stun. And the engine supports these actions perfectly (I tried sneaking when playing Fallout: New Vegas but failed miserably). There is a myriad of solutions for you to choose from. It would be even better if you could influence the plot to an even greater, more substantial degree - but perhaps that would be too much to ask for.

Of course the game also has some weak sides. Because it's a little bit dated, the graphics are not up to today's standards. The other thing is that the world seems not too densely populated - but as a matter of fact that may be down to the hardware limitations of the year 2000.

I got the game in the Deus Ex Trilogy Pack (3 games, just 20 PLN). The pack also contains the other installments: Deus Ex: Invisible War and Project Snowblind. Invisible War is still an RPG, but Project Snowblind is just a regular FPS - really disappointing. I played the original Deus Ex on Linux with Wine - it worked very well and I haven't experienced any problems. I tried the same approach with Invisible War, but it was so buggy that I turned it off after just an hour. Perhaps I'll try to run it under Windows some other time - first to see if the bugs are gone :)

Oh, and there is also a third installment coming, which will be a prequel to the first game. It will be called Deus Ex: Human Revolution. Go see some trailers and gameplay videos on YouTube. It seems well worth the time and money it will cost.

UPDATED: some links from comments below:

Comments

@chester - that's pretty cool, it looks great. Too bad I'm only finding out about it now. I'm not going to play the whole thing again just for the textures, though.

@Zal - Of course! This is a great phenomenon. A pleasant one.

And I'll add this: http://www.offtopicproductions.com/hdtp/ :)
though I haven't tried it myself

I think I'll give it a try after all :) cheers

In my opinion the first Deus Ex is much better than the next part. I can't wait for Deus Ex: Human Revolution and I hope that it won't disappoint us Deus Ex lovers :D

Do you know that every time somebody mentions Deus Ex, someone will reinstall it? ;]

JMS Redelivery With ActiveMQ and Servicemix

The other day I felt a compelling need to implement a JMS redelivery scenario. The exact scenario I’d been trying to handle was:

  1. my message is in an ActiveMQ queue or topic
  2. its processing fails because of some exception - e.g. a database access exception due to the server being unavailable
  3. since we get an exception, the message is not handled properly, so we may want to retry processing some time later
  4. of course, for the redelivery to happen, the message needs to stay in the ActiveMQ queue - fetching messages from the queue is stopped until the redelivery succeeds or expires
See how this can be done after the jump :)

To get this behaviour I first tried implementing an Apache Camel route but, as it turns out, Camel does not provide facilities for this exact kind of JMS redelivery. It is possible to set the JMS connection to transacted mode, but then the redeliveries happen one right after another, at fixed intervals.

What I ended up doing was implementing a servicemix-jms endpoint. I used roughly this configuration (the service name and some wiring details below are approximations - the JNDI names, processorName and rollbackOnError are the parts that matter):

    <!-- assumes xmlns:jms="http://servicemix.apache.org/jms/1.0"
         and xmlns:test="urn:test" declared on the root element -->

    <bean id="connectionFactory" class="org.springframework.jndi.JndiObjectFactoryBean">
        <property name="jndiName">
            <value>activemq/connectionFactory</value>
        </property>
    </bean>

    <bean id="resourceAdapter" class="org.springframework.jndi.JndiObjectFactoryBean">
        <property name="jndiName">
            <value>activemq/resourceAdapter</value>
        </property>
    </bean>

    <jms:endpoint service="test:redeliveryService"
                  endpoint="jmsEndpoint"
                  role="consumer"
                  processorName="jca"
                  rollbackOnError="true"
                  connectionFactory="#connectionFactory"
                  resourceAdapter="#resourceAdapter"
                  activationSpec="#activationSpec"/>

As you can see, we look up a couple of things in the JNDI registry, so you need to have them configured on the Servicemix side - a sample config is presented further in this entry.

The bean responsible for configuring the redelivery settings is the activationSpec. You can set various things with it (a sketch follows the list), like:

  • initial redelivery delay
  • maximum number of redeliveries
  • backoff multiplier
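
A minimal sketch of such a bean, assuming the ActiveMQ resource adapter's ActiveMQActivationSpec class (the destination name and the values are just examples):

    <bean id="activationSpec" class="org.apache.activemq.ra.ActiveMQActivationSpec">
        <!-- where to consume from - example values -->
        <property name="destinationType" value="javax.jms.Queue"/>
        <property name="destination" value="my.input.queue"/>
        <!-- the redelivery settings listed above -->
        <property name="initialRedeliveryDelay" value="5000"/>
        <property name="maximumRedeliveries" value="6"/>
        <property name="backOffMultiplier" value="2"/>
        <property name="useExponentialBackOff" value="true"/>
    </bean>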

The really important bits in the jms:endpoint config for this to work are:

  • processorName="jca"
  • rollbackOnError="true"

Servicemix should have the following entries in its JNDI registry (conf/jndi.xml). Roughly - the exact bean wiring may differ in your setup, the important parts are the two ids and the extra namespaces:

    <!-- extra namespaces declared on the root element:
         xmlns:jencks="http://jencks.org/2.0"
         xmlns:amqra="http://activemq.apache.org/schema/ra" -->

    <!-- the resource adapter pointing at the broker -->
    <amqra:resourceAdapter id="activemq/resourceAdapter" serverUrl="tcp://localhost:61616"/>

    <!-- plus a JCA-managed connection factory, built with the jencks factories on top of
         that resource adapter and registered under the id activemq/connectionFactory -->

When the redeliveries are exhausted, the message is routed to the global Dead Letter Queue called ActiveMQ.DLQ. Since this is a single bag for all the failed messages from all queues, you may want to configure this aspect differently. For example, you can tell ActiveMQ to create a separate DLQ for each queue. Use a config like the one below to achieve it - the changes go into the broker configuration:

  <broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost">
    <destinationPolicy>
      <policyMap>
        <policyEntries>
          <!-- apply to every queue; each gets its own DLQ prefixed with "DLQ." -->
          <policyEntry queue=">">
            <deadLetterStrategy>
              <individualDeadLetterStrategy queuePrefix="DLQ." useQueueForQueueMessages="true"/>
            </deadLetterStrategy>
          </policyEntry>
        </policyEntries>
      </policyMap>
    </destinationPolicy>
    ...
  </broker>

More on the subject of redeliveries in ActiveMQ can be found at http://activemq.apache.org/message-redelivery-and-dlq-handling.html.

Easier and Nicer JMS

JMS can seem like hostile ground. It has its quirks and strange behaviours: a couple of competing standards plus esoteric brokers, queues and topics.

At work we mainly use open source JMS solutions, namely Apache ActiveMQ. It usually comes bundled with Apache Servicemix, as the message broker for this particular ESB. As there are some minor caveats in this scenario, I'd like to describe some guidelines for getting JMS queues up and running.

Treat this post as a quick cheat sheet with the most common things about JMS I tend to forget :)

Minor glitches encountered while working with the embedded broker led to some thoughts about switching to an external broker. This is how I configure SMX and ActiveMQ.

Necessary steps:

  • change activemq.port in apache-servicemix/conf/servicemix.properties to something other than the standard port, for example 61626
  • change apache-activemq/conf/activemq.xml with these settings:
    • change the port the service listens on (value below is an example - keep it consistent with your setup):
              <transportConnectors>
                  <transportConnector name="openwire" uri="tcp://localhost:61616"/>
              </transportConnectors>
    • set up a separate JMX instance (the port is an example):
              <managementContext>
                  <managementContext connectorPort="1100" createConnector="true"/>
              </managementContext>
  • the nicest tool I found for browsing queues and topics is Hermes JMS. A sample config that connects Hermes to an ActiveMQ instance is in the picture below: HermesJMS to ActiveMQ connection config
  • sending simple messages with Hermes is basic, but what if you need to set some headers, send bulk messages, etc.? Easy, just use Hermes' XML message format - a file in which each message carries its header properties (two numeric ones, 105 and 1235, in my sample) and a CDATA-wrapped payload. It is rather self-explanatory.
  • since we use lots of Apache Camel to consume messages, here is a simple way to start a broker in your tests (a complete end-to-end sketch follows the list):
    • start a broker
              BrokerService broker = new org.apache.activemq.broker.BrokerService();
              broker.setBrokerName("AMQ-1");
              broker.addConnector("tcp://localhost:51616");
              broker.setPersistent(false);
              broker.start();
      
      Notice it has persistence disabled.
    • initialize Camel’s JMS component:
          ctx.removeComponent("jms");
          ctx.addComponent("jms", ActiveMQComponent.activeMQComponent("tcp://localhost:51616"));
      
    • if you want to pass messages to reference endpoints (like ref:input), use this wrapper method:
      private JmsEndpoint createJmsEndpoint(String endpoint) throws JMSException {
              ActiveMQComponent amqc = (ActiveMQComponent) ctx.getComponent("jms");
              JmsEndpoint endp = JmsEndpoint.newInstance(new ActiveMQTopic(endpoint), amqc);
              return endp;
      }
      
      createJmsEndpoint("ESB/XYZ")
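
Putting the pieces together, a quick end-to-end check might look like this (just a sketch - the queue name is made up, and ctx is assumed to be the DefaultCamelContext configured with the broker and the "jms" component as above):

    ctx.addRoutes(new RouteBuilder() {
        public void configure() {
            from("jms:queue:ESB.TEST").to("mock:out");
        }
    });
    ctx.start();

    MockEndpoint out = (MockEndpoint) ctx.getEndpoint("mock:out");
    out.expectedMessageCount(1);
    ctx.createProducerTemplate().sendBody("jms:queue:ESB.TEST", "<ping/>");
    out.assertIsSatisfied();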
      
These are all the tricks I've got for now! If you know some other good tools that handle JMS, feel free to comment! Got more advice? Again, comment!

Best Pigsticking, EVER!

This looks like the best pigsticking ever! Look at that little girl! The blood, the butcher's knife, and a child's soother :)

Photo from Jamie Oliver’s cookbook, “Jamie’s Italy”. Available on Amazon.

Schematron to the Rescue!

In an ideal world all the standards fit neatly into their places. It is sufficient to use just one serious standard, because all the problems can be solved with it - the standardization process is there for a reason. But that happens only in an ideal world, which we are not living in.

In an ideal world, when dealing with XML instances, you'd be more than satisfied using XML Schema, or RelaxNG, or any other simple formal XML definition language to declare your data structures. With that you get rigid rules as to how XML documents should look. There doesn't seem to be much space to deviate from the spec. Well, in fact there is.

The main problem with XML, aside from its verbosity, is the inability to create concise rules for the input or output document as a whole. Perhaps that's a nice feature, because XML Schema should only be used to describe a data structure, not to impose business rules on it. Perhaps not. Nevertheless, it's not what I needed in one of the projects I worked on.

What I actually needed was to check the business validity of such documents. This was used in a Web Service environment - a pretty dumb WS whose sole role was to fetch data from a database and pack it into the appropriate XML structures. Errors might occur in the database views or in the WS - as usual. They might be data duplication or elements appearing where they shouldn't. The resulting documents validated correctly against the XML schema, but were simply wrong from the business point of view.

What I needed was an XML formalization language - the ability to write rules that assert certain conditions and report when they are not met. I was in need of a tool for writing business rules to tame such XML entities.

The simplest way I found to solve this was to use Schematron - “a language for making assertions about patterns found in XML documents”. This neat tool is a set of XSL templates that you use in conjunction with a rule set on the documents to check. As a result of the check you get another XML document with the test assertions - whether they failed or succeeded.

With Schematron you write a set of rules you expect the document to satisfy, then you use the Schematron XSL template to produce an XSL stylesheet specific to your case. Now you only need to run that newly generated stylesheet on your XML document to check rule compliance. Easy - if not, check the diagram below.
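
Wired up in plain Java (JAXP), the whole pipeline boils down to two Transformer runs - a sketch, with the stylesheet and file names as assumptions (the skeleton stylesheet name depends on which Schematron distribution you use):

import java.io.File;

import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class SchematronPipeline {

    public static void main(String[] args) throws Exception {
        TransformerFactory tf = TransformerFactory.newInstance();

        // pass 1: compile the Schematron rules into a case-specific stylesheet
        Transformer compile = tf.newTransformer(new StreamSource(new File("iso_svrl.xsl")));
        compile.transform(new StreamSource(new File("rules.sch")),
                          new StreamResult(new File("rules.xsl")));

        // pass 2: run the generated stylesheet against the document under test
        Transformer validate = tf.newTransformer(new StreamSource(new File("rules.xsl")));
        validate.transform(new StreamSource(new File("getAllPhones.xml")),
                           new StreamResult(new File("report.xml")));
    }
}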

How does it look?

The rules’ file may look like this (a sketch - the titles and messages are the original ones, the XPath tests are illustrative approximations):

<sch:schema xmlns:sch="http://purl.oclc.org/dsdl/schematron">
  <sch:title>TouK Schematron test harness</sch:title>

  <sch:pattern>
    <sch:title>checking GetMigrationOffers</sch:title>
    <!-- contexts and tests below are approximations -->
    <sch:rule context="offers">
      <sch:assert test="reportDate">Report date.</sch:assert>
      <sch:assert test="count(offer) = count(offer[not(@id = preceding-sibling::offer/@id)])">Unique offers allowed.</sch:assert>
      <sch:assert test="count(offer) = count(offer[@abc])">Each offer has to have an @abc attribute</sch:assert>
    </sch:rule>
    <sch:rule context="offer">
      <sch:assert test="tariff">Each offer has to have a tariff</sch:assert>
      <sch:assert test="promotion">Each offer has to have a promotion</sch:assert>
    </sch:rule>
  </sch:pattern>

  <sch:pattern>
    <sch:title>checking GetAllPhones</sch:title>
    <sch:rule context="tac">
      <sch:report test=". = preceding-sibling::tac or . = following-sibling::tac">
        TACs should be unique. TAC: <sch:value-of select="."/>,
        handsetId: <sch:value-of select="ancestor::handset/@handsetId"/>
        offerId: <sch:value-of select="ancestor::offer/@offerId"/>
      </sch:report>
    </sch:rule>
  </sch:pattern>
</sch:schema>

Here we see two patterns, one named getMigrationOffers and the other getAllPhones. The rules - mainly their asserts - seem pretty self-explanatory, but for the sake of completeness I'll describe the rules for getAllPhones.

There is one rule, which checks the uniqueness of tac elements. It tries to ensure that each handset has a list of unique tac elements as its children. However, tac elements with the same value may appear in different handset elements.

Given an input XML in the form of (element and attribute names are approximations, the values are the original ones):

<phones>
   <offer offerId="103021">
      <handset handsetId="95">
         <tac>12028006</tac>
         <tac>20070705</tac>
         <tac>35535302</tac>
         <tac>01216100</tac>
         <tac>01216100</tac> <!-- the duplicate that should be reported -->
      </handset>
      <handset handsetId="96">
         <tac>12028006</tac>
         <tac>20070705</tac>
         <tac>35535302</tac>
         <tac>01216100</tac>
      </handset>
      <handset handsetId="97">
         <tac>12028006</tac>
         <tac>20070705</tac>
         <tac>35535302</tac>
         <tac>01216100</tac>
      </handset>
   </offer>
</phones>

And passing those two files through the processing pipeline, you get a report (an SVRL sketch, assuming the iso_svrl reporting stylesheet - the report texts are the original ones):

<svrl:schematron-output xmlns:svrl="http://purl.oclc.org/dsdl/svrl"
                        title="TouK Schematron test harness">
   <svrl:active-pattern name="checking GetAllPhones"/>
   <svrl:fired-rule context="tac"/>
   <svrl:fired-rule context="tac"/>
   <svrl:fired-rule context="tac"/>
   <svrl:successful-report test=". = preceding-sibling::tac or . = following-sibling::tac">
      <svrl:text>
        TACs should be unique. TAC: 01216100,
        handsetId: 95
        offerId: 103021
      </svrl:text>
   </svrl:successful-report>
   <svrl:fired-rule context="tac"/>
   <svrl:successful-report test=". = preceding-sibling::tac or . = following-sibling::tac">
      <svrl:text>
        TACs should be unique. TAC: 01216100,
        handsetId: 95
        offerId: 103021
      </svrl:text>
   </svrl:successful-report>
   <svrl:fired-rule context="tac"/>
[...]

After running the validation, the report presents us with the result. It shows that there are indeed non-unique tacs. Unfortunately the rule itself is not optimal, as it is executed for each tac node. A better approach would be a rule operating on groups of tacs - one rule fired per handset's tacs would be much better.
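
For the record, a per-handset variant of the rule could look roughly like this (the XPath is only an illustration, element names as in the sample above):

<sch:pattern>
  <sch:title>checking GetAllPhones - per handset</sch:title>
  <sch:rule context="handset">
    <!-- fires once per handset instead of once per tac -->
    <sch:assert test="count(tac) = count(tac[not(. = preceding-sibling::tac)])">
      TACs should be unique within handset <sch:value-of select="@handsetId"/>.
    </sch:assert>
  </sch:rule>
</sch:pattern>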

Performance considerations

As you may have seen, Schematron offers quite a lot of potential when it comes to building rules - maybe not the easiest to comprehend, since they are written with XPath, but good enough.

However, with all the XML processing involved, such validations may take a considerable amount of time. For example, processing the rules for the file getMigrationOffers.xml takes about 2.296 s - the file has 82 offer elements, which the rules operate on. Validating the other file, getAllPhones.xml, takes 5.324 s, with 3113 tac elements and the rule iterating over all of them.

This overhead is too much in most situations. That's why this solution is rather not for use in the normal execution pipeline - it would be unwise to have Schematron check each request and thus entangle it in my Web Service's normal flow.

What may be more desirable is to set up a continuous integration server with a project that queries such a Web Service and checks the rules in this manner.

Conclusion

So, what's so great about having one XML generate another XML? Perhaps nothing - I think it would take just about a day to write some shell, Python or <other text processing tool> script that would perform equally well (or even better). However, we would lose technology homogeneity and employ other environments not specific to our primary target platform, and that seems bad. Of course, using some powerful text processing tool to impose the same rules might be much more efficient, though less coherent.

What is your approach to such situations? Have you used Schematron or any other similar tool?

Code for this example is available on GitHub - http://github.com/zygm0nt/schematron-example.

Complex Flows With Apache Camel

At work we mainly integrate services and systems, and we're on a constant lookout for new, better technologies and for ways to do things more easily and sustainably.

Usually we use Apache Camel for this task - a Swiss Army knife for the integration engineer. What's more, this tool corresponds well with our approach to integration solutions (a tiny sketch after the list illustrates the style):

  • try to operate on XML messages, so you get the advantage of XPath, XSL and other benefits,
  • don't convert XML into Java classes back and forth, and don't get bogged down in conversion problems,
  • try to keep the flow of the process simple.
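
A tiny illustration of that style - just a sketch, with the XPath, the stylesheet name and the endpoint URIs made up for the example:

import org.apache.camel.builder.RouteBuilder;

public class XmlStyleRouter extends RouteBuilder {

    public void configure() throws Exception {
        from("direct:orders")
            // pull a single value out with XPath instead of unmarshalling into Java objects
            .setHeader("customerId").xpath("/order/customer/@id", String.class)
            // reshape the XML with a stylesheet taken from the classpath
            .to("xslt:enrich-order.xsl")
            .to("log:pl.touk.debug");
    }
}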

However, at first sight Apache Camel seems to have some drawbacks, mainly in the area of practical solutions ;-). It's a very handy tool if you need a pipeline with some marginal processing of the data that passes through it. It gets a lot harder to wrap your head around once you consider branching and intermediate calls to external services. This may be tricky to write properly in Camel's DSL.

Here is a simple pipeline example:

And here is the exact scenario we're discussing:

What I'd like to show is a solution to this problem. If you're using a recent version of Camel this may be easier, or a little different, but it should still more or less work this way. This code is written for Apache Camel 1.4 - a rather ancient version, but that's what we're forced to use. Oh well.

Ok, enough whining!

So, I created a test class to illustrate the case. The route defined in the TestRouter class is responsible for:

  1. receiving input
  2. setting an exchange property to a given XPath expression, which effectively is the name of the first XML element in the input stream
  3. then the input data is sent to three different external services, each of which replies with some fictional data - notice routes a, b and c. The SimpleContentSetter processor just responds with a given text.
  4. the responses from all three services are processed by the RequestEnricher bean, which is described below
  5. eventually the exchange is logged in the specified category

Here is some code for this:

import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;
import org.apache.camel.model.ProcessorType;
import org.junit.Before;
import org.junit.Test;

public class SimpleTest {

    private CamelContext ctx = new DefaultCamelContext();

    @Before
    public void setUp() throws Exception {
        TestRouter tr = new TestRouter();
        ctx.addRoutes(tr);
        ctx.start();
    }

    @Test
    public void shouldCheck() throws Exception {
        // getInOut(...) is a small helper (not shown) building an InOut exchange with the given XML body
        ctx.createProducerTemplate().send("direct:in", getInOut("<a/>"));
    }


    class TestRouter extends RouteBuilder {

        public void configure() throws Exception {

            ((ProcessorType<ProcessorType>)from("direct:in")
            .setProperty("operation").xpath("local-name(/*)", String.class)
            // send the same input to all three services and merge their outputs
            .multicast(new MergeAggregationStrategy())
                .to("direct:a", "direct:b", "direct:c")
            .end()
            .setBody().simple("${in.body}"))
            .bean(RequestEnricher.class, "enrich")
            .to("log:pl.touk.debug");

            // stub "external services" that just reply with fixed XML snippets
            from("direct:a").process(new SimpleContentSetter("<aaaa/>"));
            from("direct:b").process(new SimpleContentSetter("<bbbb param1=\"1\" param2=\"2\" param3=\"3\"/>"));
            from("direct:c").process(new SimpleContentSetter("<cccc/>"));
        }
    }
}

What's unusual in this code is that what Camel normally does when you write a piece of DSL like:

	.to("direct:a", "direct:b", "direct:c")

is pass the input to service a, then a's output gets passed to b and becomes its input, then b's output becomes c's input. The problem being, you lose the outputs of a and b, not to mention that you might want to send the same input to all three services.

That's where a little tool called multicast() comes in handy. It offers you the ability to aggregate the outputs of those services. You may even create an AggregationStrategy that will do it the way you like. The class below, MergeAggregationStrategy, does exactly that kind of work - it joins the outputs from all three services. A lot of info about the proper use of AggregationStrategies can be found in this post by Torsten Mielke.

import org.apache.camel.Exchange;
import org.apache.camel.Message;
import org.apache.camel.processor.aggregate.AggregationStrategy;

public class MergeAggregationStrategy implements AggregationStrategy {

	public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
		if (oldExchange.isFailed()) {
			return oldExchange;
		}
		// concatenate the bodies of both the in and the out messages
		transformMessage(oldExchange.getIn(), newExchange.getIn());
		transformMessage(oldExchange.getOut(), newExchange.getOut());
		return newExchange;
	}

	private void transformMessage(Message oldM, Message newM) {
		String oldBody = oldM.getBody(String.class);
		String newBody = newM.getBody(String.class);
		newM.setBody(oldBody + newBody);
	}

}

However nice this may look (or not), what you're left with is a mix of multiple XMLs. Normally this won't do you much good. A better thing to do is to parse this output in some way. What we're using for this is Groovy :), which is great for the task of parsing XML - a lot less verbose than ordinary Java.

Let's assume a scenario where the aggregated output, currently looking like this:

	<aaaa/>
	<bbbb param1="1" param2="2" param3="3"/>
	<cccc/>

is to be processed with the following steps in mind:

  • use <aaaa/> as the result element
  • use attributes param1, param2 and param3 from element <bbbb/> and add them to the result element <aaaa/>
The RequestEnricher used in the route is a Groovy class:

import groovy.xml.dom.DOMCategory

import org.apache.camel.Exchange
import org.apache.camel.Property

public class RequestEnricher {
	
	public String enrich(@Property(name = "operation") String operation, Exchange ex) {
		
		use(DOMCategory) {
			def dhl = new groovy.xml.Namespace("http://example.com/common/dhl/schema", 'dhl')
			def pc = new groovy.xml.Namespace("http://example.com/pc/types", 'pc')
			def doc = new XmlParser().parseText(ex.in.body)
			
			def pcRequest   = doc."aaaa"[0]
			
			["param1", "param2", "param3"].each() {
				def node = doc.'**'[("" + it)][0]
				if (node)
					pcRequest['@' + it] = node.text()
			}
			
			gNodeListToString([pcRequest])
		}
		
	}
	
	String gNodeListToString(list) {
		StringBuilder sb = new StringBuilder();
		list.each { listItem ->
			StringWriter sw = new StringWriter();
			new XmlNodePrinter(new PrintWriter(sw)).print(listItem)
			sb.append(sw.toString());
		}
		return sb.toString();
	}
	
}

What we're doing here, especially in the last line of the enrich method, is the conversion to String - Camel has some problems if we spit out Groovy objects. The rest is just Groovy-specific ways of manipulating XML. Looking at the enrich method's parameters, though, there is the @Property annotation, which binds the property assigned earlier in the router code to one of the arguments. That is a really cool feature, and there are more such annotations:

  • @XPath
  • @Header
  • @Headers and @Properties - give you whole maps of headers or properties
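
For illustration, a bean using a few of them might look like this (a sketch - the header and property names are made up, and the annotation attributes differ slightly between Camel versions):

import java.util.Map;

import org.apache.camel.Header;
import org.apache.camel.Headers;
import org.apache.camel.Property;

public class AuditBean {

    // Camel binds the plain parameter to the message body and the annotated ones by name
    public String audit(String body,
                        @Header(name = "JMSCorrelationID") String correlationId,
                        @Property(name = "operation") String operation,
                        @Headers Map headers) {
        return operation + " [" + correlationId + "], headers: " + headers.size();
    }
}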

This pretty much concludes the subject :) Have fun, and if in doubt, leave a comment with your question!

MeetBSD 2010

Some time ago I attended the MeetBSD conference in Kraków. This BSD event is held yearly in either Warsaw or Kraków. Due to the relatively small group of people that registered, there was only one track, which had both good and bad sides - you didn't have to choose from a myriad of lectures, but there was no way to skip the boring ones either. Well, I guess this kind of niche conference - about an operating system :) - will not attract greater attention.

DAY 1

It took place on the 2nd-3rd of July, 2010, so this review is rather dated :) However, I'd like to keep it as a reminder. I arrived at the conference site, located in the building of the Faculty of Mathematics and Computer Science, a few minutes after the official start of the conference. I had been traveling from Warsaw the same day, and the only train that would not require me to get up at some nightly hour arrived a bit too late. Oh well :)

I grabbed a tea and some biscuits and entered the series of lectures.

The first thing to listen to was a welcome intro - quite a nice one, conducted by a guy from Cisco (AFAIK). He was talking about the opportunities for Kraków and how it will become a Polish Silicon Valley in the near future, etc. I don't actually share his beliefs, but the talk was OK.

Then came Dru Lavigne with some insight into the BSD Certification program. Actually, does anybody use this? Come on. Do we really need another certification process? I for sure don't see the need, especially for the BSD community. However, the trend is good and may help popularize the BSDs among enterprise leaders, because if something is certified, then it can be used in big enterprises, right? :)

Sławek Żak talked about NoSQL. Although the talk gave a bit of info about what the idea is and how it compares to normal DBs, I did not find his presentation entertaining. In my opinion, there was not enough emphasis on the difference in usage for such databases. The NoSQL talk I'd attended at Javarsovia was a lot better.

The next talk, presented by Attilio Rao, was very, very technical. It was about the VFS/Vnode interface in FreeBSD. It was more of an API presentation and an introduction to implementing an FS in the FreeBSD infrastructure than a conference talk. This kind of presentation would be well suited for FreeBSD kernel developers, not sysadmins.

Jakub Klama's talk on the process of porting FreeBSD to a DaVinci embedded system was interesting. It had some photos of the board, touched a few technical corners, and caught my attention. Well done!

Our guy among the FreeBSD hackers - Paweł Jakub Dawidek - gave a speech about HAST - High Availability STorage. In other words, he implemented DRBD for FreeBSD. Sadly, for me this is just catching up with what Linux has had in mainline since 2.6.33 (and it worked very well even before that). It's not as feature-rich as DRBD, but the project is slowly maturing. Nevertheless, it's good to finally have this on board.

Then an inconspicuous guy came onto the stage: Nikolay Aleksandrov from Bulgaria, who gave a talk titled Developing high speed FreeBSD. And the subject was astounding. He works for a major Bulgarian ISP and, lacking the cash to buy some serious networking gear, wrote a FreeBSD extension that sits in between the network adapter and the kernel and does all the hard work like routing, VLANs, and more. His goal was to make it lightning fast, and as far as his results showed, he succeeded. This talk was really amazing - he did what would normally take hundreds of thousands of dollars, in cash and skills, in his free time, or at least as a pet project.

DAY 2

Well, I skipped the first lecture of the day, out of laziness ;)

I decided to pack up and arrive in time to hear what FreeBSD can borrow from AIX. Jan Srzednicki talked about some nice tools from the AIX world. He proposed that adding an educational, console-based tool for performing basic (and even not so basic) tasks would encourage people to learn the system. I think it would work. However, the rest of his ideas weren't good enough - at least not for me.

Next in line was The new USB stack - an interesting talk about the new USB stack development, conducted by Hans Petter Selasky. This guy was really passionate about USB things ;-)

Martin Matuska presented his set of shell scripts that let you create mfsBSD - an in-memory FreeBSD install. Since I'm already doing this kind of thing with OpenBSD, the talk was entertaining.

Marko Zec and Network stack virtualization. This was about extending FreeBSD to be able to create lots of compartmentalized environments with their own network stacks. As noted in the presentation, the solution still has problems with graceful shutdown of a stack - not stable enough yet, but very promising.

The closing presentation was given by Warner Losh (the very knowledgeable guy behind bsdimp.blogspot.com) on the subject of using FreeBSD in a commercial setting. The talk was not what I'd expected, but it was nevertheless very interesting. It was about branching and merging back changes when using FreeBSD as a base for a commercial product - something that could easily apply to any other open source project. Warner described possible strategies for branching and performing merges, and also noted the pros and cons of all the described solutions.

All in all, it was a fun time. Even though I don't use any BSD as my primary system at the moment, and my BSD skills are a bit rusty, the talks were nice enough :) for a hobbyist like me.

6+ Hour Layover in Stockholm

Last Monday, while coming home from Sweden, I had a ~7h layover between flights. Since Arlanda Airport does not offer a lot to do, at least not for me and not for 7 hours, I decided to go into Stockholm. This post sums up a few nice places that I'd like to visit on my next trip there.

I took the Arlanda Express, a fast train that takes you directly from the airport to Stockholm. On board you travel at 200 km/h in quite good conditions. The train gets you to the Central Station area in 20 minutes, which is great! The main venue I planned to see was Gamla Stan, and all the attractions were also there, on the island.

The first noteworthy place on the way was Christina's costume salon. It was unfortunately closed on the day of my visit; however, since it was recommended to me and the place looked interesting, I plan on paying it another visit. The shop offers a lot of costumes in different sizes and themes. Website: Shop's page

It seems like just a usual gift shop, but it has this unique feeling, this Norse spirit. The shop offers many fantastic gifts and souvenirs, but unfortunately they don't come cheap. Nevertheless, it's worth visiting and perhaps buying something. Especially nice were the little metal statues of Viking gods. Handfaste - The Viking Shop

A neat little workshop that makes metal signs on request, from your own design. Very good quality. There are actually quite a few such workshops nearby, but this one may serve as a reminder. Det “lille” Skyltmakerie

An awesome sci-fi and fantasy bookshop. Lots of stuff: comics, books, DVDs, collectibles, etc. I strolled into it on my way back to the airport, which is why I had only around 5 minutes to look around. But the shop is definitely worth visiting. They offer English books alongside Swedish ones, in roughly 50:50 proportions. Science Fiction Bokhandeln

my photo

The darkest and best-stocked metal shop I've ever seen - though I haven't seen many such stores. Again, I stumbled upon it on my way to the airport, so I had only around 5 minutes to look around, scan through the t-shirts, etc :D Sound Pollution

my photo

Some t-shirt shop that also had lots of underwear, dark costumes, etc. I think there may be more to it than I noticed.

my photo

Unfortunately I didn't find this one during my wandering. I'll try next time: Roberta Settels shoes designs

I don't know the name of this place packed with designer items, but it was warm and cosy :) The prices were high and the utensils neat. It bore the marks of the “Swedish design school”.

my photo

Maps created with the Google Maps Static API:

A nice thing to use! Now I'm searching for a way to make the maps dynamic on click :)
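
For reference, a static map is just a single image URL with all the parameters baked in - roughly like this (location, zoom and size are examples; back then the sensor parameter was still required):

    http://maps.googleapis.com/maps/api/staticmap?center=Gamla+Stan,Stockholm&zoom=15&size=512x512&markers=color:red|Gamla+Stan,Stockholm&sensor=false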

Comments

Så vackert är Stockholm i natt… (Stockholm is so beautiful tonight…)

Generic Enum Converter for iBatis

My goal was to create a simple, extensible Enum converter that would work with iBatis. This seems like a trivial problem, but it took me a while to find a proper solution.

There were some additional preconditions:

  • all the given Enums are JAXB-generated objects - but any standard Java Enum should work
  • conversion was 1-to-1, no special conditions and processing

The example Enum for this problem looks like this one (copy & paste from the JAXB-generated source):

@XmlType(name ="ServiceType") 
@XmlEnum
public enum ServiceType {

    @XmlEnumValue("stationary")
    STATIONARY("stationary"),
    @XmlEnumValue("mobile")
    MOBILE("mobile");
    private final String value;

    ServiceType(String v) {
        value = v;
    }

    public String value() {
        return value;
    }

    public static ServiceType fromValue(String v) {
        for (ServiceType c: ServiceType.values()) {
            if (c.value.equals(v)) {
                return c;
            }
        }
        throw new IllegalArgumentException(v);
    }

}

“No big deal”, you say. I beg to differ. What I wanted to achieve was a simple construction which would look like this when used for another Enum (CommonEnumTypeHandler is the name of my generic converter):

public class ServiceTypeHandler extends CommonEnumTypeHandler<ServiceType> { }

Unfortunately, due to the fact that Java does not have reified generics (which is described in multiple places), I had to stick with passing the Class of my enum explicitly. So it looks like this:

public class ServiceTypeHandler extends CommonEnumTypeHandler<ServiceType> {

    public ServiceTypeHandler() {
        super(ServiceType.class);
    }
}

My final class looks like the one below:

import java.sql.SQLException;

import com.ibatis.sqlmap.client.extensions.ParameterSetter;
import com.ibatis.sqlmap.client.extensions.ResultGetter;
import com.ibatis.sqlmap.client.extensions.TypeHandlerCallback;

public abstract class CommonEnumTypeHandler<T extends Enum<T>> implements TypeHandlerCallback {

    Class<T> enumClass;

    public CommonEnumTypeHandler(Class<T> clazz) {
        this.enumClass = clazz;
    }

    public void setParameter(ParameterSetter ps, Object o) throws SQLException {
        if (enumClass.isInstance(o)) {
            ps.setString(((T) o).name().toUpperCase());
        } else
            throw new SQLException("Expected " + enumClass + " object, got: " + o);
    }

    public Object getResult(ResultGetter rs) throws SQLException {
        Object o = valueOf(rs.getString());
        if (o == null)
            throw new SQLException("Unknown parameter type: " + rs.getString());
        return o;
    }

    public Object valueOf(String s) {
        return Enum.valueOf(enumClass, s.toUpperCase());
    }
}
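
To make iBatis actually use such a handler, you still have to register it - either globally in the SqlMapConfig, or per result map / parameter map. A sketch of the global variant (package names are examples):

    <typeHandler javaType="com.example.types.ServiceType"
                 jdbcType="VARCHAR"
                 callback="com.example.dao.ServiceTypeHandler"/>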