Wednesday 9 November 2011

Upgrading Android Databases and ORMLite

I've long been an advocate of ORMLite: it's easy to use, lightweight and relatively feature-full.
One of the biggest headaches I run into when creating Android applications is how to nicely implement an upgrade strategy for database migrations.
What I want from any strategy is the standard approach I would take when writing an application on any platform or in any language, usually taking the form of the steps below.
1) Make the change: alter the table, add the new column, etc.
2) Deprecate the old: remove it from my queries, domain objects and ORM objects; stop using it!
3) Next release, clean up: old columns/tables are removed, so in the case of a failure the application can be rolled back successfully.

Android exposes a method which both ORMLite and the standard SQLiteOpenHelper classes provide: public abstract void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion).


According to the docs:

Called when the database needs to be upgraded. The implementation should use this method to drop tables, add tables, or do anything else it needs to upgrade to the new schema version.
Parameters
db - The database.
oldVersion - The old database version.
newVersion - The new database version.
This all seems fine, but when you have lots of changes, and when you use an ORM framework such as ORMLite, it becomes a little more tricky. I am not completely happy with the way I have chosen to deal with migrations, however I feel it fits my purpose and allows me to abstract away some of the raw SQL. I originally started down a builder-pattern route but soon realised that although it would fit my purpose I would be writing lots of code and gaining little in readability and maintainability. In addition to this, managing a series of static column-name strings after you have removed the column, and having to change my domain objects by adding an interface just to allow easier migration, didn't seem right.
My first approach looked something like this:
Below is a sample utility class which I created with the aim of easing the migration process. I ended up using a series of asset files which contain the migrations for each database version. A little helper class (UpgradeHelper.java) simply loads the required SQL files and keeps the statements in a list, stripping out comments; the list is then executed and if anything fails along the way, I bootstrap the database and re-source everything.
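The original helper isn't embedded here, but a minimal sketch of the idea might look like this (the migration_<version>.sql asset naming convention is an assumption):

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.List;

import android.content.Context;

public class UpgradeHelper {

	// Loads "migration_<version>.sql" from the assets folder, strips comments
	// and blank lines, and returns the individual SQL statements in order.
	public List<String> loadMigrations(final Context context, final int version) throws IOException {
		final List<String> statements = new ArrayList<String>();
		final BufferedReader reader = new BufferedReader(new InputStreamReader(
				context.getAssets().open("migration_" + version + ".sql")));
		try {
			final StringBuilder current = new StringBuilder();
			String line;
			while ((line = reader.readLine()) != null) {
				line = line.trim();
				if (line.length() == 0 || line.startsWith("--")) {
					continue; // skip blank lines and single-line SQL comments
				}
				current.append(line).append(' ');
				if (line.endsWith(";")) { // end of a statement
					statements.add(current.toString().trim());
					current.setLength(0);
				}
			}
		}
		finally {
			reader.close();
		}
		return statements;
	}
}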
Below is how I have chosen to utilise UpgradeHelper.java to maintain and load the required database migration files to be run. This shows the standard way of using onUpgrade to determine whether migrations need to be sourced, adding them to a list of migrations and then executing them one by one, as in the sketch below.
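Wired into onUpgrade it looks roughly like this; the context field and the bootstrapDatabase method are hypothetical stand-ins for however your helper class gets hold of those:

@Override
public void onUpgrade(final SQLiteDatabase db, final int oldVersion, final int newVersion) {
	final UpgradeHelper upgradeHelper = new UpgradeHelper();
	try {
		// Load and run every migration file between the installed version and the new one
		for (int version = oldVersion + 1; version <= newVersion; version++) {
			for (final String statement : upgradeHelper.loadMigrations(this.context, version)) {
				db.execSQL(statement);
			}
		}
	}
	catch (final Exception e) {
		// If anything fails along the way, bootstrap the database and re-source everything
		bootstrapDatabase(db);
	}
}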

Limitations mean I am restricted to only being able to run SQL files during the migration process, but improvements could be made to allow domain objects to be loaded and run.
Future options/improvements would be: 
  • Include the ability to create/delete/alter tables using a more fluid, human-readable syntax, i.e. not just executing SQL but creating an interpreter which can be used in conjunction with ORMLite to source/alter domain objects.
  • There are probably more SQL comment types which can be stripped.
  • Abstract away some of the worst attributes of SQLite, e.g. the fact you cannot easily perform ALTER TABLE statements without jumping through hoops.
In conclusion, this is how I am dealing with database migrations inside Android at present. I'm sure it's not the best solution to managing database migrations/upgrades, but it fits my purpose and hopefully means the task of adding new migrations will be easier in future.

Thoughts and comments are always welcome. I will try to put together a sample Android project with example upgrades and utility classes.


Tuesday 1 November 2011

First Impression, Tracer Bullet Development

Recently my colleague (@one_hp) and I read Ship It! - A Practical Guide to Successful Software Projects and have been looking for a suitable project/chunk of work to trial Tracer Bullet Development, which can be found in chapter 4, page 105. At first, firing tracer bullets through your application may feel similar to creating throw-away prototypes or simple proof-of-concept designs and implementations, but when you get into the swing of it the benefits start to shine through.


The remit of the work we started on was to create some sort of issue-raising framework for teams at my company. Currently this was done via Outlook, sending emails to various group inboxes which are triaged and worked on accordingly.


From the business, the requirements are to replace this system with one which can be reported on easily, which will save money by not needing X number of Outlook licences, which does not increase the workload of the people who raise issues, and which does not greatly impact the current process for triaging and actioning issues in each of the corresponding teams.


So after a few days speaking to the various teams, gathering some lightweight/loose requirements and performing some analysis on the emails being sent back and forth containing issues and resolutions, we got down and started to plan the work that was needed.


Immediately we followed our usual design process: get some initial designs down on paper (created in Pencil, a great tool!) and map out the existing workflow processes plus what we imagine the new one may look like (via Gliffy). This usually takes a few attempts, especially after discussing the various options available with team mates/managers/users.

Now that we were happy enough to start coding and evolving the idea as we work, we could start firing some bullets through our application and get the proposed screen changes up and viewable so we could get some feedback.

As I understand it, the idea behind tracer bullet development is to shoot bullets from one side of your application to the other but not to spend any time on complex logic/processing; this can be added at a later stage once you are happy you're accurate enough at shooting.

The application being changed consists of a very standard Spring-based web stack with Flex 4 for the front end: standard controller, service and repository layers with a mixture of JDBC and Hibernate for persistence.

We took the screen designs and scribbles we had made and created an initial schema for what we thought would be needed.

Creating some simple POJOs at this stage is trivial, and with the help of Eclipse no time needs to be wasted on POJO creation. Next we started adding some initial controller and service methods. These simply call through to the layer below, not performing any logic; in the case of a persistence method you could simply add the objects to a static map, or log out the parameters for viewing if you need to. We spent some time initially only writing the interfaces, making sure we had good method names and correct signatures before proceeding. The devil is in the detail. A rough sketch of this kind of stub wiring is shown below.
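The names below (Issue, IssueService and friends) are purely illustrative, not from the real project, but the shape of the stubs was roughly this:

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative tracer-bullet stubs: the service just delegates and the
// repository holds objects in a static map instead of persisting them.
class Issue {
	private String title;
	private String team;

	public String getTitle() { return this.title; }
	public void setTitle(final String title) { this.title = title; }
	public String getTeam() { return this.team; }
	public void setTeam(final String team) { this.team = team; }
}

interface IssueRepository {
	void save(Issue issue);
	List<Issue> findAll();
}

class InMemoryIssueRepository implements IssueRepository {

	private static final Map<Long, Issue> STORE = new ConcurrentHashMap<Long, Issue>();
	private static final AtomicLong IDS = new AtomicLong();

	@Override
	public void save(final Issue issue) {
		STORE.put(IDS.incrementAndGet(), issue); // just hold on to it, no real persistence yet
	}

	@Override
	public List<Issue> findAll() {
		return new ArrayList<Issue>(STORE.values());
	}
}

class IssueService {

	private final IssueRepository repository;

	IssueService(final IssueRepository repository) {
		this.repository = repository;
	}

	public void raiseIssue(final Issue issue) {
		this.repository.save(issue); // no validation or business logic yet
	}

	public List<Issue> findAllIssues() {
		return this.repository.findAll();
	}
}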

So we have a schema and domain objects, and we can go from controller to repository without any processing/change; we should now be ready to cock the gun and fire our first bullet....bang...bang.

One of the good things about tracer bullet development is that our bullets consisted of single POJOs or lists of POJOs, and instinctively we ended up making factory methods for creating these POJOs, which is exactly what we tend to do when writing unit tests. Straight away it was obvious that the unit tests could reuse these factory methods, which should be in a perfect state to use once we are ready to finally implement the solution. We set out to use tracer bullet development to speed up the feedback process and to get some usable screens we could demonstrate a.s.a.p. Within a few hours we had a complete end-to-end implementation which had no logic, persistence or complexity. The factory methods looked something like the sketch below.
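Building on the illustrative Issue class above, the kind of factory methods I mean look something like this; exactly the same helpers get reused in the unit tests later on:

import java.util.ArrayList;
import java.util.List;

class IssueFactory {

	// Canned data for the screens and, later, the unit tests
	static Issue anOpenIssue() {
		final Issue issue = new Issue();
		issue.setTitle("Printer on floor 3 is jammed");
		issue.setTeam("Facilities");
		return issue;
	}

	static List<Issue> someOpenIssues(final int count) {
		final List<Issue> issues = new ArrayList<Issue>();
		for (int i = 0; i < count; i++) {
			issues.add(anOpenIssue());
		}
		return issues;
	}
}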

No tests?

You may be wondering where all our tests are; at this point we have no tests and it's certainly not test driven. However, we have a very lightweight and quickly constructed framework, and we can concentrate on what would be the time-consuming part of this particular task: GUI screen creation and feedback. This doesn't mean that once the screens have been created and the users are pleased with them we wouldn't go down the test-first route before adding any complex logic at any point in the process.
 
GUI's

We began creating our screens and making new components, and the best thing about this is that immediately we have as much canned data as we want and need. No messing about with a DB if you find you have a missing field, a bad name or a wrong data type; if something is missing, add/remove/change a field on your POJO, tweak your factory and fire up your instance again. We can easily test small and large amounts of data, obscure-looking data and data we know would probably be impossible. Hopefully at the end of this we will not only have a feature users are pleased with and happy to use, but a GUI which will work under various scenarios, even obscure ones. Using our initial Pencil designs we created the screens and started peppering them with bullets, and within a day or so we had all the screens changed and ready to demo with virtually no implementation in place. We had certainly not finished the task, but we were ready to get feedback and the users would be able to click about, simulate normal actions and see the outcome as if it were complete.

Is it any different?

After spending a good 2-3 weeks getting this task to completion, I can say I liked the way tracer bullets got us up and running fast and allowed the software to be demoed early on, even when nothing had really been done.


We have had the inevitable business changes, feature creep and developer mistakes, which have probably forced us to take longer than originally estimated for this task, but in the end I am pleased with the software and hopefully on go-live the users and the business will be pleased with the outcome.


Tracers seem to have made the process of user feedback a lot easier and simpler when it comes to testing and getting approval for GUIs. They aided us when we came to implement the solution, as virtually all the code we had already written could be reused in tests and wasn't thrown away. We have hopefully ended up with a solution to a problem the business has, and a GUI which the users are pleased with and whose features they are well aware of thanks to constant feedback.


I would recommend using the tracer bullet methodology and recommend reading Ship It! - A Practical Guide to Successful Software Projects, as it's a great read.


Any and all comments welcome, James





Saturday 15 October 2011

Copy Messages from one ActiveMQ instance to another with Apache Camel and Groovy

This is just a quick example showing how I have used a little Groovy script (it doesn't need to be Groovy) to quickly fix or transfer messages from one ActiveMQ queue to another. In the example below the script simply re-routes messages from one instance to another instance using Apache Camel. I have used this before to fix errors in messages and re-process them before placing them back on another queue or the same queue. It can also be used to quickly move failed messages from a failure queue to the live queue. Scripts like these are easier to use when the message contents are of a standard/open type, i.e. XML, JSON, raw text etc. This way you can easily parse, fix and modify message bodies without any complex logic or translation.
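The original Groovy script isn't shown here; as a rough Java equivalent, a Camel route copying everything from one broker's queue to another's looks something like this (broker URLs and queue names are placeholders):

import org.apache.activemq.camel.component.ActiveMQComponent;
import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class QueueCopier {

	public static void main(final String[] args) throws Exception {
		final CamelContext context = new DefaultCamelContext();

		// Register one ActiveMQ component per broker instance
		context.addComponent("sourcemq", ActiveMQComponent.activeMQComponent("tcp://source-host:61616"));
		context.addComponent("targetmq", ActiveMQComponent.activeMQComponent("tcp://target-host:61616"));

		context.addRoutes(new RouteBuilder() {
			@Override
			public void configure() {
				// Re-route everything from one queue to the other; a processor
				// could be slotted in here to fix up message bodies first
				from("sourcemq:queue:orders.failed").to("targetmq:queue:orders.live");
			}
		});

		context.start();
		Thread.sleep(30000); // give the route time to drain the queue
		context.stop();
	}
}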

Saturday 24 September 2011

Using Git Diff

Having never performed a git diff on tags, I thought I'd give a simple demonstration of what to do. We currently use fab for tagging and deploying Git projects at work and all in all it seems to do it well; it's the first deployment aid I have seen whilst using Git, and as we transition from Subversion to Git it fills the gap until proper release and deployment scripts can be written.

Get the current history for the project whose versions you want to diff.
git hist



Get the git tag commit hashes:

tag v10 = 8df952f
tag v11 = 1ed9472

Diff the versions:

git diff 8df952f..1ed9472

You should now be presented with the diff between those git commits.
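Since a tag is just a reference to a commit, you can also diff the two tags directly by name and skip looking up the hashes:

git diff v10..v11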

Tuesday 20 September 2011

List Object Array Matcher

I've been dealing with lots of large batch object arrays recently, and when writing a unit test I found there is no built-in Object array matcher as part of the current Hamcrest matchers package, so I made a simple one, sketched below.
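The matcher itself was shared as a snippet which isn't reproduced here; a minimal sketch of the same idea using Hamcrest's TypeSafeMatcher (the class and factory-method names are my own) would be:

import java.util.Arrays;

import org.hamcrest.Description;
import org.hamcrest.Matcher;
import org.hamcrest.TypeSafeMatcher;

public class ObjectArrayMatcher extends TypeSafeMatcher<Object[]> {

	private final Object[] expected;

	private ObjectArrayMatcher(final Object[] expected) {
		this.expected = expected;
	}

	public static Matcher<Object[]> equalToObjectArray(final Object[] expected) {
		return new ObjectArrayMatcher(expected);
	}

	@Override
	public boolean matchesSafely(final Object[] actual) {
		// Compare the two arrays element by element (including nested arrays)
		return Arrays.deepEquals(this.expected, actual);
	}

	@Override
	public void describeTo(final Description description) {
		description.appendText("an Object[] equal to " + Arrays.deepToString(this.expected));
	}
}

It then reads nicely in a test: assertThat(actualBatch, equalToObjectArray(expectedBatch));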

Wednesday 14 September 2011

Using iText to create QRCodes

Recently I have been playing around with iText, trying to determine how to stamp a QR code onto dynamically generated PDFs.

It's simple to create a QR code using iText's built-in BarcodeQRCode class, but stamping this onto a PDF is a little bit unusual and I have been unable to find an alternative as it stands.

First you must either create a button in code or place the button on the template which is being stamped; I've called mine barcode_button and placed it on my template in the correct position and at the correct size.

Next you must get hold of your button, create your QR code and set the button image to be the QR code you created, then finally replace the button on the PDF with the new one.
When you finalise the document and close it you will end up with a nice QR code where the button once sat. A rough sketch of the whole process is below.
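This sketch assumes iText 5 and a template containing a pushbutton field named barcode_button; the file paths and encoded text are placeholders:

import java.io.FileOutputStream;

import com.itextpdf.text.Image;
import com.itextpdf.text.pdf.AcroFields;
import com.itextpdf.text.pdf.BarcodeQRCode;
import com.itextpdf.text.pdf.PdfReader;
import com.itextpdf.text.pdf.PdfStamper;
import com.itextpdf.text.pdf.PushbuttonField;

public class QrCodeStamper {

	public static void main(final String[] args) throws Exception {
		final PdfReader reader = new PdfReader("template.pdf");
		final PdfStamper stamper = new PdfStamper(reader, new FileOutputStream("stamped.pdf"));

		// Create the QR code image using iText's built-in BarcodeQRCode
		final BarcodeQRCode qrCode = new BarcodeQRCode("http://www.example.com", 1, 1, null);
		final Image qrImage = qrCode.getImage();

		// Get hold of the button, set its image to the QR code, then replace
		// the button on the PDF with the new one
		final AcroFields fields = stamper.getAcroFields();
		final PushbuttonField button = fields.getNewPushbuttonFromField("barcode_button");
		button.setLayout(PushbuttonField.LAYOUT_ICON_ONLY);
		button.setProportionalIcon(true);
		button.setImage(qrImage);
		fields.replacePushbuttonField("barcode_button", button.getField());

		// Finalise the document; the QR code now sits where the button was
		stamper.close();
		reader.close();
	}
}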


Further examples of creating QRCodes with iText can be found @ http://itextpdf.com/examples/iia.php?id=156

Sunday 11 September 2011

Thoughts and Ramblings

I've been busy pretty much every weekend for the last six weeks, either visiting friends and family or knee deep in jobs around the flat. Unfortunately this has left me no time for coding, which hurts, but on the bright side, the journey time when travelling around England over the last few weeks has allowed me to catch up on some good reading. I have had a reading list, either on-line or on paper, for some time and this has allowed me to get through some good books and hopefully in turn try some new ideas and improve my overall agile practices. My reading list is available on GitHub @ https://github.com/jamesemorgan/reading-list

Agile Software Development, Principles, Patterns, and Practices


Blog: http://cleancoder.posterous.com/
Author: Robert C. Martin


This is a great book in general. I've read it in bits over the last few years, and after reading several of the chapters again it is full of useful and well-documented code examples. I especially like the chapter on patterns; it demonstrates some of the classic and not-so-often-used patterns, showing how and when to use them and when not to. I've also read some other work by Uncle Bob, Clean Code: A Handbook of Agile Software Craftsmanship, which is another great read and one I would recommend to other software engineers.

 

Practices of an Agile Developer

Web: http://pragprog.com/book/pad/practices-of-an-agile-developer
Authors: Venkat Subramaniam and Andy Hunt



Ship It

 

Web: http://pragprog.com/book/prj/ship-it
Authors: Jared Richardson and William Gwaltney Jr.


Thursday 4 August 2011

AspectJ and Custom Logging Annotation

I've been using AspectJ for a while now and find it has some really cool usages as well as the standard ones like transaction management. One of the projects I've recently been working on has a very rigid process, going through the motions in a very structured and standardised way. Originally, when I created the little application, I always had a log statement at the beginning of each method simply stating its purpose and the incoming method args. As this is a simple JAR application deployed and run via a main method, this sort of logging is good for determining run times and how far the application gets. I had already used AOP in the project to profile and time some of the very long running operations, which is when I thought of hooking AspectJ and method logging together.

What I have created is a simple custom logging annotation which, when applied to a method on a Spring-managed bean, will log out at the given level and print the method name and the args it was invoked with.


I've created a sample/demonstration project over on my GitHub which can be found here. To see the logging, simply run the main method inside the demonstration project from GitHub.

Here is a usage demonstration of how an application can utilise the logging annotation.
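The demo code lives in the GitHub project; below is a sketch of what the usage looks like. The @Log annotation and LogLevel enum names are assumptions, and only the class and method names in the log output further down come from the project:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

import org.springframework.stereotype.Component;

// Assumed shape of the custom logging annotation
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Log {
	LogLevel value() default LogLevel.DEBUG;
}

enum LogLevel { DEBUG, INFO, WARN }

@Component
class LoggingExample {

	@Log(LogLevel.DEBUG)
	public void logMe() { }

	@Log(LogLevel.INFO)
	public void logMeInfoWithMethodArgs(final String text, final int number) { }

	@Log(LogLevel.WARN)
	public void logMeWarningWithMethodArgs(final String text, final int number, final boolean flag) { }

	@Log(LogLevel.DEBUG)
	public void logMeDebugWithMethodArgs(final String text, final int number, final boolean flag, final double ratio) { }
}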



Simply annotate your method and when it gets invoked you will have logging such as this:

13:27:48.560 [main] DEBUG c.morgan.design.demo.LoggingExample - Method Invocation=[logMe] - Args=[]
13:27:48.561 [main] INFO  c.morgan.design.demo.LoggingExample - Method Invocation=[logMeInfoWithMethodArgs] - Args=[abcdefg, 1234]
13:27:48.561 [main] WARN  c.morgan.design.demo.LoggingExample - Method Invocation=[logMeWarningWithMethodArgs] - Args=[abcdefg, 1234, true]
13:27:48.573 [main] DEBUG c.morgan.design.demo.LoggingExample - Method Invocation=[logMeDebugWithMethodArgs] - Args=[abcdefg, 1234, false, 0.2]

This is only a sample project; if this were to be used in a more production-ready application I'm pretty sure I would pick a better name. :)

This is how the demonstration Logging Aspect looks:
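The real aspect is in the demo project; the sketch below collapses it into a single advice for brevity (the project itself uses one advice method per logging level, as the enhancements list further down notes), reusing the assumed @Log annotation from above:

import java.util.Arrays;

import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Component;

@Aspect
@Component
class MethodLoggingAspect {

	// Runs before any Spring-managed bean method annotated with @Log
	@Before("@annotation(log)")
	public void logMethodInvocation(final JoinPoint joinPoint, final Log log) {
		final Logger logger = LoggerFactory.getLogger(joinPoint.getTarget().getClass());
		final String message = String.format("Method Invocation=[%s] - Args=%s",
				joinPoint.getSignature().getName(),
				Arrays.toString(joinPoint.getArgs()));
		switch (log.value()) {
			case INFO:
				logger.info(message);
				break;
			case WARN:
				logger.warn(message);
				break;
			default:
				logger.debug(message);
				break;
		}
	}
}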

I've chosen to take an annotation based approach but this could also be done using XML configuration.



Enhancements to make would be:

* Instead of having three different methods for the various logging levels, have only one, retrieve the annotation from the method and determine the logging level from the annotation value.
* At the moment this only supports Logback and Log4j; it could be made more abstract and detect the logging library used by each application.
* Make it into a library project which you could simply include and configure, i.e. only log for particular methods, packages etc.
* Allow for custom logging pattern configuration.

Anyway, it's a small, simple sample of the power of AspectJ; all comments welcome as usual.

Agile @ 10 - Some interesting posts

Here are two very interesting posts about Agile's 10th anniversary from two people who have experienced it all. It's interesting for me since I've only been doing it for three years and I can already see changes happening; what will be next?

Ten Years Of Agile: An Interview with Robert C. "Uncle Bob" Martin

Laurent Bossavit: Agile Ten Years On

Monday 1 August 2011

Issues Using Maven 2 Assembly Plugin & Spring

On Friday evening I tried in vain to package up my deployable Spring JAR file using the standard Maven assembly plugin with the configuration seen below, with no luck. I kept getting hit by Spring throwing exceptions when trying to run the JAR file, as the spring.schemas and spring.handlers files were incorrect. It seems that when Maven builds the package it keeps overwriting the schemas and handlers files, and by the end of the build they are both missing elements.

Exception in thread "main" org.springframework.beans.factory.xml.XmlBeanDefinitionStoreException: Line 8 in XML document from class path resource [spring/spring-dataCleansing.xml] is
invalid; nested exception is org.xml.sax.SAXParseException: cvc-elt.1: Cannot find the declaration of element 'beans'.


	<plugin>
		<artifactId>maven-assembly-plugin</artifactId>
		<configuration>
			<descriptorRefs>
				<descriptorRef>jar-with-dependencies</descriptorRef>
			</descriptorRefs>
			<archive>
				<manifest>
					<mainClass>com.some.package.Main</mainClass>
				</manifest>
			</archive>
			<!-- plus an attempt to merge the META-INF/spring.handlers and
			     META-INF/spring.schemas files, which the assembly plugin
			     cannot do (see MASSEMBLY-360 below) -->
		</configuration>
		<executions>
			<execution>
				<id>make-assembly</id>
				<phase>package</phase>
				<goals>
					<goal>attached</goal>
				</goals>
			</execution>
		</executions>
	</plugin>

It turns out this is a known bug and quite well documented. See: http://jira.codehaus.org/browse/MASSEMBLY-360.
So I turned my attention to the Maven shade plugin to build and package my JAR file, and this worked straight away.

My final maven shade plugin configuration is as follows:


	<plugin>
		<groupId>org.apache.maven.plugins</groupId>
		<artifactId>maven-shade-plugin</artifactId>
		<version>1.4</version>
		<executions>
			<execution>
				<phase>package</phase>
				<goals>
					<goal>shade</goal>
				</goals>
				<configuration>
					<transformers>
						<transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
							<mainClass>com.some.package.Main</mainClass>
						</transformer>
						<transformer implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
							<resource>META-INF/spring.handlers</resource>
						</transformer>
						<transformer implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
							<resource>META-INF/spring.schemas</resource>
						</transformer>
					</transformers>
					<filters>
						<filter>
							<artifact>*:*</artifact>
							<excludes>
								<exclude>META-INF/*.SF</exclude>
								<exclude>META-INF/*.DSA</exclude>
								<exclude>META-INF/*.RSA</exclude>
							</excludes>
						</filter>
					</filters>
				</configuration>
			</execution>
		</executions>
	</plugin>


Using org.apache.maven.plugins.shade.resource.AppendingTransformer allows the shade plugin to merge all spring.schemas and spring.handlers files into a single entry for each. I use the filters section to exclude particular files which would otherwise cause a SecurityException to be thrown; the files come from the javax.mail package.

It produces three jar files once built:
  • original-ProjectName.jar - The packaged application without dependencies.
  • ProjectName-shade.jar - The non-shaded application with dependencies.
  • ProjectName.jar - The shaded application with dependencies (i.e. a renamed package).

Monday 25 July 2011

Bypassing Proxies by setting Maven POM properties

After running into major issues this morning when trying to test a 3rd-party SOAP-based web service integration, with a "Connection Refused" exception being thrown, I thought I'd share the issue and solution.

Recently a new proxy server was set up with all back-office network traffic now going through it, and this was stopping my local sandbox from hitting the required endpoint.

In order to resolve this you can simply add some additional system properties to your pom.xml file so Java can bypass the internal proxy. Replace {port} and {server} with your proxy settings and it should work straight away.

See: http://info4tech.wordpress.com/2007/05/04/java-http-proxy-settings/ for further settings if required.


	<property>
		<name>http.proxyPort</name>
		<value>{port}</value>
	</property>
	<property>
		<name>http.proxyHost</name>
		<value>{server}</value>
	</property>
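
Those property entries need to live inside a plugin that passes system properties to the JVM; a sketch assuming the maven-surefire-plugin (so the proxy settings apply to the JVM that runs the tests):

	<plugin>
		<groupId>org.apache.maven.plugins</groupId>
		<artifactId>maven-surefire-plugin</artifactId>
		<configuration>
			<systemProperties>
				<property>
					<name>http.proxyHost</name>
					<value>{server}</value>
				</property>
				<property>
					<name>http.proxyPort</name>
					<value>{port}</value>
				</property>
			</systemProperties>
		</configuration>
	</plugin>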

Saturday 9 July 2011

ORMLite and Android a good Companion

Persistence with Android

When creating Android applications there are many ways for you to store data, each fit for its own purpose. Below is a quick breakdown of the five main recommended ways for applications to persist data on Android. More information can be found in the developer guide.
  • Shared Preferences - Ideal for application settings and small chunks of data which can be used globally within your application. Options include booleans, floats, ints, longs and strings, all of which are stored as key-value pairs.
  • Internal Storage - Ideal for storing application cache data; this medium is private to your application.
  • External Storage - Store public data on either removable storage media (such as an SD card) or internal (non-removable) storage. Used for storing pictures, ring tones, music etc.
  • SQLite Databases - Store structured data in a private database. The database is private to the application and supports standard CRUD operations.
  • Network Connection - Store data on the web with your own network server. Can be used for all purposes and will be familiar to most Java developers; you must have an available data connection for this to work. Classes within the following packages can be used: java.net.* or android.net.*
During the last six months I have been learning Android and slowly working through the options available to me for each application or feature I create/add. I have experience using all of these methods for data persistence except External Storage, as I have not yet come across a situation where it would be the most appropriate method for the applications I have been creating. One point to emphasise: when storing data for your application, ensure you make the correct choice of persistence medium. I myself have chosen incorrect methods and tried to make them work for the wrong purpose; this will only end in dirty, unmanageable code and an application which is hard to enhance at a later date.

As part of this guide I'm going to demonstrate how ORMLite can be used to aid and ease the development of Android applications and make storing data much easier than interfacing directly with Android's inbuilt SQLite database.
About a month ago I stumbled across an object relational mapping (ORM) library which supports Android. After a little investigation, porting an existing application and using it from scratch for a new application I am currently developing, I deem ORMLite and Android to be great companions. As part of this post I will try to show you how easy it can be to use ORMLite for Android, plus demonstrate how I use the library and how it has naturally fit in with my current development practices and patterns.


Alternatives
There are a few alternatives available to using ORMLite, two of which can be seen below. Both projects are reasonably new, and since the Android ORM space is still in its early days I'm sure new libraries will emerge. I prefer the syntax when using ORMLite and find it more intuitive to use and develop with. I also like the fact that ORMLite is a multi-purpose library which supports several databases.

I am yet to do any performance-based analysis on ORMLite or the alternatives. At a later date I may re-create the example from this post in all three and try to come up with some simple performance comparisons of the three ORM libraries mentioned.
Example Project
For the purpose of this example I will be creating a small Android application which contains a one-to-many relational table structure, in which you will be able to view, add, edit and delete entries. All source files can be found at my GitHub account here.
The relationship consists of one PERSON having many APPs.
Setting up your project
1) Simply create a new Android project with the standard Eclipse wizard. I've called mine DemoORMLite, with the main activity called DemoORMLiteActivity.

2) Next download the packages ormlite-android-4.23.jar and ormlite-core-4.23.jar and add them to the classpath of your Android project. If you're using Maven these packages appear to be available, so add them to your pom as normal. You are now ready to use ORMLite.
Creating your domain classes
Below are the two domain classes I have created, Person.class and App.class. As ORMLite is annotation driven you simply annotate your fields with the required options, setting field constraints as required, and then annotate each class to define its table name. In this example I have followed the standard bean convention of private fields with public getter and setter methods; this is optional.

@DatabaseTable(tableName = "persons")
public class Person {

	@DatabaseField(generatedId = true)
	private int id;

	@DatabaseField(canBeNull = true)
	private String name;

	@ForeignCollectionField
	private ForeignCollection<App> apps;

       // Getters and setters are also present but not shown in this demo

	public Person() {
		// all persisted classes must define a no-arg constructor with at least package visibility
	}
}

@DatabaseTable(tableName = "apps")
public class App {

	@DatabaseField(generatedId = true)
	private int id;

	@DatabaseField(canBeNull = true)
	private String name;

	@DatabaseField(foreign = true, foreignAutoRefresh = true, columnName = "person_id")
	private Person person;

       // Getters and setters are also present but not shown in this demo

	public App() {
		// all persisted classes must define a no-arg constructor with at least package visibility
	}
}

Creating your Database Helper
You will need to create a DatabaseHelper to allow ORMLite to create and manage your database. You can extend OrmLiteSqliteOpenHelper.class, which allows ORMLite to take over management. Simply create your table structure using helper methods such as the ones highlighted below. I also use this as an opportunity to expose my DAOs. A fuller sketch of such a helper follows the fragments below.

	// Should be called inside the onCreate method
	TableUtils.createTable(connectionSource, App.class);
	TableUtils.createTable(connectionSource, Person.class);

	// Should be called inside the onUpgrade method
	TableUtils.dropTable(connectionSource, App.class, true);
	TableUtils.dropTable(connectionSource, Person.class, true);

	// Exposing the Dao for App CRUD operations
	public Dao<App, Integer> getAppDao() throws SQLException {
		if (this.appDao == null) {
			this.appDao = getDao(App.class);
		}
		return this.appDao;
	}
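
Pulled together, a minimal helper built from those fragments might look like this; the database name and version are placeholders:

import java.sql.SQLException;

import android.content.Context;
import android.database.sqlite.SQLiteDatabase;

import com.j256.ormlite.android.apptools.OrmLiteSqliteOpenHelper;
import com.j256.ormlite.dao.Dao;
import com.j256.ormlite.support.ConnectionSource;
import com.j256.ormlite.table.TableUtils;

public class DatabaseHelper extends OrmLiteSqliteOpenHelper {

	private Dao<App, Integer> appDao;
	private Dao<Person, Integer> personDao;

	public DatabaseHelper(final Context context) {
		// Database name and version are placeholders
		super(context, "demo_ormlite.db", null, 1);
	}

	@Override
	public void onCreate(final SQLiteDatabase db, final ConnectionSource connectionSource) {
		try {
			TableUtils.createTable(connectionSource, App.class);
			TableUtils.createTable(connectionSource, Person.class);
		}
		catch (final SQLException e) {
			throw new RuntimeException("Could not create database", e);
		}
	}

	@Override
	public void onUpgrade(final SQLiteDatabase db, final ConnectionSource connectionSource, final int oldVersion, final int newVersion) {
		try {
			// Simple drop-and-recreate upgrade strategy for the demo
			TableUtils.dropTable(connectionSource, App.class, true);
			TableUtils.dropTable(connectionSource, Person.class, true);
			onCreate(db, connectionSource);
		}
		catch (final SQLException e) {
			throw new RuntimeException("Could not upgrade database", e);
		}
	}

	public Dao<App, Integer> getAppDao() throws SQLException {
		if (this.appDao == null) {
			this.appDao = getDao(App.class);
		}
		return this.appDao;
	}

	public Dao<Person, Integer> getPersonDao() throws SQLException {
		if (this.personDao == null) {
			this.personDao = getDao(Person.class);
		}
		return this.personDao;
	}
}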

Creating your repository
The repository layer wraps all ORMLite calls, hiding the exceptions and code relating to database operations. This means that when I use it in my Android activities I will not have to deal with try/catch blocks everywhere and can easily standardise what happens in these situations. Adding a further layer to your code may not always be suitable, especially on resource-limited devices such as mobile phones, but I don't expect the impact to outweigh the code cleanliness/abstraction and ease of development gained by accessing the database this way.


	// An example of a delete method on my repository
	public void deletePerson(final Person person) {
		try {
			final ForeignCollection<App> apps = person.getApps();
			for (final App app : apps) {
				this.appDao.delete(app);
			}
			this.personDao.delete(person);
		}
		catch (final SQLException e) {
			e.printStackTrace();
		}
	}

Using with your Activity
Your activity will need to extend an ORMLite activity wrapper to enable your application to use the library (see below for a further explanation). Once you have extended the correct activity it can be treated exactly like the original parent class. Below is a sample of the onCreate method of the DemoORMLiteActivity class I created, showing how I use the repository mentioned above.

@Override
public void onCreate(final Bundle savedInstanceState) {
	super.onCreate(savedInstanceState);
	setContentView(R.layout.main);

	// Create a new repo for use inside this activity
	this.demoRepository = new DemoRepository(getHelper());

	// Get all people; I use this to then create a simple list view and array adapter.
	this.persons = this.demoRepository.getPersons();

	// Additional Android specific code in here
}
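
For completeness, the class declaration that onCreate sits in looks something like this, with the helper type being the DatabaseHelper from earlier:

import java.util.List;

import com.j256.ormlite.android.apptools.OrmLiteBaseActivity;

public class DemoORMLiteActivity extends OrmLiteBaseActivity<DatabaseHelper> {

	private DemoRepository demoRepository;
	private List<Person> persons;

	// onCreate as shown above
}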

Why must I extend another Class?
Coming from a mainly enterprise Java background, I'm used to simply using @Autowired or @Inject to obtain my services and DAOs and letting my container deal with finding and setting the required fields. So the idea of extending a class just to be able to access my database layer is something I don't see too often, but it isn't doing anything complicated; it simply exposes ORMLite to the most common Android components. Check out the source code for the four base classes (OrmLiteBaseActivity, OrmLiteBaseListActivity, OrmLiteBaseService, OrmLiteBaseTabActivity) and you will find the set-up and exposure of a DAO layer which you can use inside your main Android classes, plus handling of the opening and closing of your DB connections. I am yet to run into a scenario where the base classes don't fit my needs, and if you do you can always create your own.
Conclusion 
I have not gone into great depth in this demo but have simply tried to show how easy it can be to use ORMLite with Android. Previously, using Android's inbuilt DB classes has proven to be time consuming, clunky and error prone, and the code is never as clean as when using this library. All sources can be grabbed from my GitHub account here; the example application is a simple demonstration of some CRUD operations inside a very simple Android application. In total the application took no more than an hour to create. When using the application, long press each row to get a list of options available for each entry and single click each row to get a list of the applications each person has.


All thoughts, comments and opinions are always welcome. 

James

Tuesday 28 June 2011

Latest Ramblings

After getting back from an amazing holiday travelling around Brazil I've been yearning to get stuck into some code after a good three-plus weeks off.

I've finally got round to starting my latest Android application, and have been investigating some various methods and libraries for use in my persistence layer.

One of the libraries I've been looking into is ORMLite, a simple ORM library which I don't think was specifically built for Android but has good support for it. It tries to follow the KISS principle, which I believe will fit nicely into the resource-limited scope of many Android devices. Previously I have ended up making simple wrapper classes as a form of object-to-DB mapping, but still ended up writing my own SQLite queries and handling connection scope etc. After trialling it this weekend and creating a few simple relational classes, it seems to fit my purpose well. It doesn't have all the bells and whistles of a big ORM library such as Hibernate, but it makes the creation and querying of my SQLite tables much easier.

One good thing about using an ORM library is that the code quickly developed itself into a very familiar structure which I have seen in many projects, forming a simple repository layer. Coming from an enterprise background, usually working on large-scale J2EE, JDBC and/or Hibernate based projects, the class structure and layout all feels good and usable, something I struggled with when directly interfacing with SQLite via the inbuilt Android libraries. After a few more weekends hacking away I'll aim to put together a simple demonstration of ORMLite's capabilities and show how I use it, plus any issues I have hit and aids I have developed when using the library.

Saturday 28 May 2011

To Scrum or Not To Scrum?

At the company where I work we have always practised a version of Scrum and tried to be as agile as possible, resource, time and money permitting. Everything I mention in this post I am, or have been, guilty of at some point.

On Friday I hosted a retrospective with the rest of the development team, like we have many times before, and tried to come up with some areas of improvement as well as bringing all issues to the forefront, plus a bit of venting from everyone. Some of the main points that were raised are below:
  • Last minute requirements from the business forcing late code changes and rushed releases.
  • Lack of a product owner/area specialist who was willing to take responsibility for the product being released.
  • Scope too woolly, and acceptance criteria being defined by development post-implementation.
  • A constant small stream of tasks which had been deemed maintenance tasks but had really been small features or tweaks to existing code bases or products.
After some debate and some insightful questions, the conclusion boiled down to development work starting too early. This isn't the first time and won't be the last time that this conclusion will be drawn. Developers weren't strong enough to kick tasks back to the business when no requirements had been gathered and the scope was so woolly it was left to the imagination of the developers. Time was wasted in meetings post-implementation trying to define acceptance criteria and testable characteristics; basically we were constantly chasing our tails trying to fill in the gaps we had previously missed.

To Scrum or Not To Scrum

This leads me on to the next discussion we had: Why do we practise Scrum? Do we actually practise Scrum anymore? What alternatives are there? Ever since I joined the team we have been practising a form of Scrum within the development team and things worked well. Tasks would be defined early enough, 90% of the time we stuck to a standard fortnightly release cycle, most of the time we had Product Owners, most of the time we had Stakeholders and a Scrum Master, we tried to release every sprint, and the business would define the next sprint's work which we would estimate and plan. All had been working well (with the odd minor hiccup) up until the last few months, when development changed.

The business started making decisions faster, becoming more volatile and reactive; this could be due to the financial/industry pressures within the company and/or a greater focus on pushing through new products and services, which will hopefully be paying my wages in the future. As the business became more reactive and proactive in the current market climate, the development team had not really thought about how the changes would affect the team and the development process. We did not have the mindset to deal with the new changes, especially their speed and volume. We are still thinking in a very iterative and almost waterfall model (it hurts to say it), letting each specialism within the team take responsibility for that function only, i.e. developers just being developers, testers just testing and leaving our business analyst to just do the analysis. This is ineffective and we all needed a little reality check!! WE ARE AGILE, we are proud to be AGILE, and yet we had failed at one of its most fundamental features. I don't think this had been done intentionally; I think this slip-up is a recent problem and not one which had been causing major problems previously. So after much more discussion we came to the conclusion that Scrum isn't really what we practise, and if it is, it is being bent and squeezed to fit our current development cycles and current needs.

I think one of the biggest problems has been the team struggling with the change and the constant rush of new feature requests, which has been splitting the team up (and its focus), each pair working on different tasks yet the sprint still being classed and termed as if it were a standard sprint.
We were not doing the project work originally planned, and the project work we had been doing was falling way behind due to features and changes which the business had been requesting at a much greater rate. Our team resource had been reduced, and not only were we working on a single large several-week project but also on many small tasks on different platforms and different projects, with other smaller projects being thrown into a sprint half way through. We had not planned for this, our code base was all messed up, and releases had failed. Larger features were being coded against the branch, as the trunk was un-releasable due to the original large-scale project we had estimated would take several releases. We had been doing project-based work for some time, focusing on larger-scale projects defined by the business, incrementally pushing out new features with an overall common goal which had been loosely defined. New features/products were now being elbowed into this and simply placed under the same umbrella as our main goal, even though the goal posts had moved. Trying to please everyone and trying to be as agile as possible, we had simply accepted these changes and not taken into account how they would affect the way we had been happily working for some time. Scrum and ourselves had failed to manage the current working conditions; the business at present did not really need a team who used Scrum, it was too rigid, and we had been practising a very fluid, ad-hoc style which doesn't fit anymore.

What’s next?

Kanban or Scrum-ban has been raised as a possible alternative to Scrum and is possibly a much better way for us to manage the ever-changing needs of the business. Scrum-ban sounds like it would be a great place to start: “Scrum-ban is especially suited for maintenance projects or (system) projects with frequent and unexpected user stories or programming errors. In such cases the time-limited sprints of the Scrum model are of no appreciable use” – wiki. Making the move from Scrum to pure Kanban may not have the same benefit, and the change may be too great and too quick for a first step. Changing from Scrum to Scrum-ban will hopefully be beneficial to the development team, and if all fails and we revert to the old pure Scrum methodologies then we only land back where we are now. We need to ensure we retrospectively follow up each change; making adjustments where required will hopefully mean the move is a successful one.

As a development team we have work to do: improving communication and being more agile all round would make us more reactive and reduce the problems mentioned earlier. We have improvements to make in our source control management, having three streams, not just a trunk and a branch but also a development branch, which would improve the flexibility of our code base and not render it redundant if features are half finished or stories are dropped from the sprint. Team functions need to be less isolated: developers can help out if testers are behind, and testers can pair with developers on particular tasks to ensure test coverage is complete.

All in all I am looking forward to practising a slightly different management style, and it may just be the making of our team. I'll report back on the changes made, how they fit into our development team and how successful they are. Comments and feedback always welcome.

James.

Sunday 15 May 2011

Groovy, Apache Batik and a little syntactical sugar speeds up Android development

As I read more and more of Android in Action and delve further into the world of mobile development, one of the newer concepts I have come across is how to scale Android applications to various different screen sizes, hardware and resolutions. Google has good documentation in this subject area and an entire section of the developer portal dedicated to it, but this doesn't mean that trying to adhere to the guidelines isn't a time-consuming activity.

The problem I have been having is that as I try to improve the usability and functionality of my current Android applications in development, and try to follow the guidelines laid out by Google for simple things such as the various icon sizes needed for different screen resolutions, this process is eating all my time!

Table 1. Summary of finished icon dimensions for each of the three generalized screen densities, by icon type.

Icon Type                          | Low density (ldpi) | Medium density (mdpi) | High density (hdpi)
Launcher                           | 36 x 36 px         | 48 x 48 px            | 72 x 72 px
Menu                               | 36 x 36 px         | 48 x 48 px            | 72 x 72 px
Status Bar (Android 2.3 and later) | 12w x 19h px       | 16w x 25h px          | 24w x 38h px (all preferred, width may vary)
Status Bar (Android 2.2 and below) | 19 x 19 px         | 25 x 25 px            | 38 x 38 px
Tab                                | 24 x 24 px         | 32 x 32 px            | 48 x 48 px
Dialog                             | 24 x 24 px         | 32 x 32 px            | 48 x 48 px
List View                          | 24 x 24 px         | 32 x 32 px            | 48 x 48 px

I aim for all the icons I use to be in SVG format, simply because they scale well and can be easily converted to the desired format, PNG in this case. I use GIMP for Windows to convert all the SVG files I have into the various different PNGs. This process is very manual and very time consuming, which got me thinking...

After a little Googling I stumbled across an Apache project for XML image manipulation called Batik. It seemed vast and very comprehensive, but I thought I'd give it a go.



I created a simple Groovy script which wraps Batik, exposing some Android-orientated functionality and cutting the time to convert images from minutes to seconds; when you start trying to convert larger numbers of SVGs to PNGs in various sizes with the same name, outputting them into different folders, it can take hours by hand.


I simply define the various required image size groups, which hides the mess and allows me to simply say I want this SVG converted to the required ANDROID_TAB dimensions.



Below is what I ended up with.


import org.apache.batik.apps.rasterizer.SVGConverter

class SvgImageCreator {

 public static void main(String[] args) {
  new SvgImageCreator().convert(SizeGroups.ANDROID_DIALOG, [
   "menu_bug",
   "menu_clear_stats",
   "menu_credits",
   "menu_email",
   "menu_improvement",
   "menu_play_again",
   "menu_settings",
   "menu_manage_groups",
   "menu_add_group"
  ])
 }

 def convert(def sizes, def filesToConvert){
  filesToConvert.each{ file ->
   sizes.eachWithIndex { i, index ->
    println """Converting ${file} to W:${i[0]} H:${i[1]} for folder: ${SizeGroups.FOLDERS[index]}"""
    convertToPng(file, SizeGroups.FOLDERS[index], i[0], i[1])
   }
  }
 }

 def convertToPng(def name, def folder, def width, def height){
  final SVGConverter svgConverter = new SVGConverter();
  svgConverter.setSources(["images/svg/${name}.svg"] as String[]);
  svgConverter.setHeight(height);
  svgConverter.setWidth(width);
  svgConverter.setDst(new File("images/${folder}/${name}.png"));
  svgConverter.execute();
 }
}


At present it consists of a simple Groovy script with a runnable main method, with a bit of Groovy syntactical sugar making the script easy to read and easy to extend if need be. I created a static helper class (SizeGroups) containing definitions of all the various sizes and folder names. All it does is loop round the given SVGs, convert each one to the three different sizes specified and place the resulting PNGs in different folders depending on size. I imagine that as new app features get created, and thus further graphics are required, you could simply add your new conversion definition and the job is done.

After thinking about this overnight, you could extend this to be quite intuitive, creating an Eclipse plug-in which scans chosen folders and outputs images of various sizes depending on file naming conventions. You could also have it as a standalone runnable JAR which you boot up when you start development and which performs the required actions after looking at a definition file laying out the conversions to and from. Obviously it doesn't do that at the moment, and to be honest it has solved my problem, saving me vast amounts of time as it stands, so hopefully someone else may also benefit from it.

Please feel free to check out my GitHub where all the code is available: https://github.com/jamesemorgan/AndroidSvgToPngConverter. As ever comments are always welcome.

To get the project running simply clone the above project and add the missing dependency on Batik by downloading it from http://xmlgraphics.apache.org/batik/. With the Groovy Eclipse plugin installed you should be fine.

An example of the above images used in an Android menu can be seen below: