Every Friday, Dylan’s class teacher chooses one child to be the Helper for the following Monday. The general idea is that the chosen child puts a favourite toy or object into a box and the class plays a game of twenty questions to guess the object, with clues given by the child. It’s one of those great little ideas for getting children presenting, asking and answering questions, and sharing their interests.
Last time we had a dead snake in a bottle, courtesy of my brother-in-law’s partner, who found it in the hay in her horse’s yard.
This time, in keeping with the previous serpent theme, we’re bringing something biblical to the classroom!
I had a thought about opening their box and putting them directly into the Helper’s box, so that when the teacher opens it, they get covered in flies!!! Flies!! Muwahahahahaha….
I love TED, those amazing and interesting people standing on stage presenting their cutting edge work in Technology, Entertainment and Design. The TED organisation’s slogan is “Ideas worth spreading” and their videos, published almost daily, form part of my daily video diet.
This week, though, it appears that someone at TED thought one of their speakers’ videos was an idea not worth spreading. It’s Nick Hanauer’s short speech arguing that cutting taxes for the rich does not lead to “job creation”; instead, it’s consumers (read: the middle class) who create jobs by creating demand. There’s been a lot of noise across the blogosphere that TED (an organisation whose participants are almost always wealthy, advantaged and well educated) is censoring debate on inequality.
There’s a copy of the video linked here if you want to see it:
Disclaimer: This is a technical article intended for software developers. It’s full of techno-waffle, so if coding isn’t your thing, please feel free to read the rest of my blog where I go into astronomy, technology, gadgets and general geekery. You have been warned!
As a freelance software developer, I get all kinds of requests. I spend most days in my home office from morning till night, designing and coding for Windows desktops, although occasionally a tasty project comes my way and I have been known to code interfaces to a wide variety of hardware, sometimes even from (and for) a train, boat or plane.
But I have never been asked to develop Windows software for a non-Latin market before; this time it’s Arabic-speaking Saudi Arabia. Compared to developing software for Latin-alphabet countries, Arabic presents a number of challenges. The characters don’t resemble anything I’ve ever worked with, and the script reads from right to left. Happily, .NET strings are Unicode (UTF-16) internally, so the data types handle Arabic text without modification; the bulk of the task is migrating all the various words and phrases from English to the destination language, in this case Arabic.
In the old days, a Micro ISV wanting to translate one of their applications from one language to another would typically make a copy of the source and painstakingly step through it. It’s easier these days with modern development techniques and tools, but the end result is the same: the same product, localised. The trouble with Arabic-speaking countries is that not only do you have to translate the text strings but the numerals too. Oh, and did I mention that everything runs right to left, and users conceptually work their way backwards through forms as well?
For this particular project, I’m creating a prototype of an existing Windows .NET application. A big application: one with over 400 form classes and well over 2,000 supporting classes. Clearly, putting together a quick prototype without affecting the original solution is going to take a long time without some form of automation.
After creating a new copy of the solution (we’re not localising one solution dynamically, as the localisation involves substantial business logic changes), we want all hard coded strings extracted to XML files which can be read by a translator’s third-party tool and then re-imported into the solution, with a different set of XML files depending on which language you’re using. If you just want to localise the strings (e.g. from English to French or Spanish) then the process is largely the same, but you’ll have to manage multiple language XML (.resx) files within a single solution and maintain the logic workflow for all regions on the same code base.
The first ‘trick’ of the trade is to set the “Localizable” property on all of the forms to “True”. This can be accomplished with a Visual Studio macro that loops through the forms and forces the Visual Studio designer to refresh itself. Visual Studio will then automatically move all localisable hard coded strings from the controls in a form’s designer class (e.g. MyForm.Designer.cs) into the form’s own resource file (e.g. MyForm.resx). If you have custom controls with custom string properties, remember to add the [Localizable(true)] attribute to those properties first, so that the designer knows to extract their hard coded strings to the resource file, too.
At this point we have our unmodified English version and our new “Arabic” version, which has all of the form UI text in resource files but fundamentally is the same solution. The next step is to extract all of the hard coded strings from the form code itself, placing each string in the resource file and leaving a reference to its resource file location in the form code.
This is the time consuming bit. Done by hand, you’ll be highlighting, cutting and pasting a lot, and will likely make errors. Whilst Visual Studio has a nice automatic procedure for the first step (so automatic that you may not even notice any change at all), there’s seemingly nothing built in to handle this simple refactoring job. There is a Visual Studio addon written for Visual Studio 2008 called the “Resource Refactoring Tool”. It works with Visual Studio 2010 and gives you a right-click context menu option called “Extract to Resource”. Simply highlight the string, right-click and select “Extract to Resource” and it will be done for you, but you still have to wade through, in this case, 400 complex forms containing tens of thousands of strings awaiting extraction.
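Under the hood, “Extract to Resource” is doing something like the following. Here’s a toy sketch in Python (the `Resources` class and the key-naming scheme are my own inventions; the real tools let you control naming): find each hard coded string literal, replace it with a resource reference, and collect the extracted key/value pairs ready for the .resx file:

```python
import re

# Matches a double-quoted C# string literal, allowing escaped characters.
LITERAL = re.compile(r'"((?:[^"\\]|\\.)*)"')

def extract_strings(source, class_name):
    """Replace each string literal with a resource reference and
    return the refactored code plus the extracted key/value pairs."""
    resources = {}
    def replace(match):
        key = f"{class_name}_String{len(resources) + 1}"
        resources[key] = match.group(1)
        return f"Resources.{key}"
    return LITERAL.sub(replace, source), resources

code = 'MessageBox.Show("File not found", "Error");'
refactored, resources = extract_strings(code, "MainForm")
# refactored: MessageBox.Show(Resources.MainForm_String1, Resources.MainForm_String2);
```

A real refactoring of course has to respect C# syntax (verbatim strings, interpolation, attributes and so on), which is exactly why a naive text pass needs the filtering described below.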
If I extracted one string every ten seconds, 20,000 strings works out at over 55 hours: an entire week of effort, and that’s without stopping to breathe.
So I went looking for a tool that would loop through a given set of classes and extract their strings into a resource file. I checked everywhere and found only one viable offering, from a company called Lingobit. Their flagship product, “Lingobit Localizer”, claims to be a one stop shop for localising software products and I dare say it looks good. It’s also very expensive, costing about the same as Adobe Creative Suite 5. They’ve recently released a smaller product which doesn’t claim to do any translating but fills the gap for what is essentially a glorified text-to-XML parser. And did I mention that it’s £200?
Maybe calling it a glorified text parser is a little unfair, since one could say the same about Visual Studio. Still, a basic solution-wide text parser that allows filtering of results and some form of XML-compatible export is a big omission from Visual Studio, and there aren’t any other addons or tools that provide this functionality. Both ReSharper and CodeRush come close with their refactoring tools, but neither is powerful enough to insert a new string resource and refactor more than a single line of code at a time.
Lingobit Extractor has a very simple interface. You create a new Extractor project, load the solution (it works for many different programming languages, not just managed code; as I said earlier, it’s a text parser) and then write your filters. Filters are a convenient way of matching strings, and you can have more than one filter per project. In fact you’ll need more than one, and their effects are cumulative. For instance, I wanted to exclude all SQL and reserved names from being extracted, as extracting them would break the solution and prevent it from compiling. Out of the box this process is a little frustrating; the application really should ship with some basic filters depending on the type of solution loaded.
String filtering. It took a couple of hours to get all the various filters right, but it was time well spent as it reduced the number of unwanted strings in the list prior to exporting to the resource file.
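To give an idea of how cumulative exclusion filters behave, here’s a rough sketch in Python (the filter rules and candidate strings are my own invented examples, not Lingobit’s): each filter removes its matches from the candidate list before the next one runs:

```python
import re

# Invented example candidates: real UI text mixed with SQL and reserved names.
candidates = ["Customer name", "SELECT * FROM Orders", "OK", "Error", "Save changes?"]

# Each filter is a predicate; a string matching ANY filter is excluded.
filters = [
    lambda s: re.match(r"\s*(SELECT|INSERT|UPDATE|DELETE)\b", s, re.I),  # SQL statements
    lambda s: s in {"Error", "And", "Date"},                             # reserved names
]

def apply_filters(strings, filters):
    for exclude in filters:  # filters are cumulative: each pass narrows the list
        strings = [s for s in strings if not exclude(s)]
    return strings

survivors = apply_filters(candidates, filters)
# survivors: ["Customer name", "OK", "Save changes?"]
```

Only the survivors go on to the resource file, which is why getting the filters right up front saves so much clean-up later.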
After that, it’s a simple case of selecting the projects and classes that you wish to translate in the left navigation pane, extracting the strings to a temporary editable table and then, if you’re happy, exporting the results to a new or existing resource file. The naming of the strings is fully controllable, the filters are easy to use and flexible, and there are no regular expressions in sight (although they are supported if you want to use them).
Once your source files have been loaded and your filter has been configured correctly, you execute the parser and review the newly created resources. It shows you a preview of your source (top right) and the newly named string references with their values (bottom right), and you can save the referenced strings into an existing or a new resource file. Splendid. Just make sure that you’ve spent sufficient time at the string filtering stage to ensure that you’re not translating, say, SQL statements. I found that even some language keywords ended up getting parsed, which broke compilation, so be careful and check everything.
The next step is to send the .resx files over to your translator. Most translators will accept them; for those who can’t (say, because they’re native speakers without the tools for editing XML files), you can use a tool such as TransView, which has a (free) viewer and a (paid) Visual Studio addon that parses your projects and combines the resource strings into a single proprietary file ready for translation. Your translator then has a very simple job of filling in the boxes, and it even includes tools for translating over the web (thanks to Google Translate) and auto-filling duplicates so you’re not translating “OK” for the thousandth time.
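That duplicate auto-fill is easy to picture: once a source string has been translated, every later occurrence of the same source text gets the same translation. A minimal sketch in Python (the data shapes here are my own assumption, not TransView’s actual file format):

```python
# Translation-memory auto-fill: entries map a resource key to a
# (source text, translation) pair, where None means "not yet translated".
def autofill(entries):
    memory = {}  # source text -> known translation
    out = {}
    for key, (source, translation) in entries.items():
        if translation is None:
            translation = memory.get(source)  # reuse an earlier translation, if any
        else:
            memory[source] = translation      # remember it for later duplicates
        out[key] = (source, translation)
    return out

entries = {
    "Form1_OK":    ("OK", "موافق"),       # translated once by a human...
    "Form2_OK":    ("OK", None),          # ...auto-filled from memory here
    "Form2_Title": ("Customers", None),   # no translation known yet
}
filled = autofill(entries)
```

A single pass in declaration order is enough, provided the human-translated entry appears before its duplicates; real tools typically do a first pass to build the memory.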
Anyhow, I quite liked Lingobit Extractor. It did the job for me, but its very existence raises the question of why these string extraction refactoring features aren’t available within the Visual Studio IDE and aren’t even included in the two main developer productivity tools, CodeRush and ReSharper.
What I liked: it parses fast. If The Flash could parse files, he’d parse them this quickly. The solution, with its 400-plus forms, was parsed in seconds.
Annoyances: selecting multiple resources and choosing to delete only removes one of the selected items, not all of them. Some constructs like #region aren’t supported, and neither are default values for optional parameters in method declarations, so you have to filter these sections out. The trouble is that the filter definitions are stored in the tool’s project file and cannot be shared between projects, which is a bit of a problem; the project file is XML, though, so it’s not hard to make a ‘template’ by copying the <Filter> elements from one project to another. The support URL at the time of writing is throwing an HTTP 500, which wasn’t helpful. Also, any reserved-name strings like “Error”, “And” and “Date” will by default create escaped versions in the resource file (e.g. “_Error”), but the escaped version isn’t used in the refactored code, so it’s a pain going through and changing them all by hand afterwards. Finally, after a few hours I noticed that the trial version had written “TRIAL” into all of my .resx strings, which was a bit of a problem.
If you can live with the annoyances and have lots of classes to parse, then I highly recommend Lingobit Extractor and, optionally, TransView.
Congratulations to everyone who has had their image published this month. There are a couple of fantastic aurora borealis (Northern Lights) images; I’m hoping to get out at some point later this year, somewhere further north, to see the lights for myself.
The Sun goes through a cycle of activity, and this year we’re approaching a period of maximum solar activity. This means lots of coronal mass ejections from the Sun, and the ejected particles are responsible for the Northern Lights. So if you’re far enough north (Scotland, Norway, Iceland) you stand a good chance of getting a light show this year!
The new iPad is finally here and it’s not called the iPad “3”, “HD” or any other moniker. For the purposes of the review, I’ll refer to the latest iPad as the “3”.
The headline feature is the new “Retina” display. It doesn’t sport quite as high a pixel density as the iPhone 4/4S (the iPhone’s screen, scaled up to iPad size, would contain more pixels than the iPad 3’s), but Apple have quadrupled the number of pixels compared with the previous iPad’s display and claim that it is better than before.
I have in my hands a white iPad 3 and I’m just about to find out…
All three generations of the iPad share the same solid construction, with an aluminium body and a toughened glass screen. The buttons and ports are nearly identical, although only the iPad 2 and 3 have front and rear-facing cameras. The iPad 3 improves on the rear camera by offering the same camera unit as the iPhone 4 (notably not the latest one from the 4S).
iPad 2 (left) and iPad 3 (right). The iPad 2 uses the camera from the iPod Touch and the iPad 3 uses the improved camera from the iPhone 4.
I rarely use the cameras on the iPad and expect to use them even less now that iOS 5’s “Photostream” feature shares camera roll images taken on the iPhone (or uploaded from a PC or Mac) across all devices.
The iPad 1 and 2 share the same screen, although on these particular units the iPad 1’s screen is marginally superior in the dark, as the iPad 2 exhibits a little backlight bleeding around the edges.
The iPad 3 shows no backlight bleeding, but it does get warm on the left-hand side after a few minutes of use.
None of these issues are dealbreakers and you’d probably have to be looking for faults to find them. In all three cases, there are no dead or stuck pixels and the screens render content beautifully.
Non-retina (iPad 2) vs. Retina comparison
At first glance, I saw absolutely no difference between the previous and the new iPad screens. It took a fairly close inspection to begin to notice the improvements, mostly in the rendering of text. Clearly it was time for a closer look…
The first test was to get a microscope up to the screen to see what the pixels actually look like. Here they are at approximately 40x magnification (the photo was taken using an iPhone 4S):
The pictures don’t do it justice. The new display really is packed with twice as many pixels in each direction (four times as many overall), and the gaps between the pixels appear smaller.
You don’t need a microscope to see the difference. Take a look at the following comparison shots between the iPad 2 (in black) and the iPad 3 (in white):
The difference becomes obvious when reading lots of text on screen. The non-Retina display works very well, using clever anti-aliasing techniques to give a readable page. The Retina display gives a pin-sharp view across entire pages of content without needing to zoom in or employ clever mathematical trickery to prevent blocky text.
Most of the time, I don’t think the average Joe would be able to tell the difference between a Retina screen and a non-Retina screen. The difference is even less pronounced when viewing images; in my tests I couldn’t confirm that the Retina screen “popped” or delivered higher saturation than the non-Retina screen. Where the Retina display really does bring benefits is in viewing large, complex websites without zooming in, and in creating and reading documents and emails, since you can now do so using a smaller view and tinier fonts. It’s nice.
The iPad 2 is still available from Apple and in their own words it is, “Still every bit as amazing”.
The latest iPad is amazing. It’s beautiful and it’s quick. It’s as quick as the iPad 2 is for all tasks and the extra oomph in the GPU allows it to play games at eyeball-rocking resolution even held mere inches from your face. That’s nice, but the original iPad and iPad 2 aren’t slouches and have some beautifully optimised games too. Some of the latest “Retina-enabled” games for the latest iPad are double the download size, so bear that in mind if you’re planning on playing lots of HD-busting games!
But what about 4G?
If you’re in the UK, like I am, the “4G”/LTE addon isn’t going to work here. Despite the fact that we’re all still using decade-old 3G technology, there’s no 4G on the horizon for the UK. Whilst there are trials, Ofcom’s sale of spectrum licences isn’t taking place until the summer of 2012, which means networks won’t be offering faster speeds until 2013 at the absolute earliest. And the iPad’s LTE radio comes in 700MHz and 2.1GHz versions, geared for North American markets. The 700MHz version will never work in the UK (that spectrum is used for TV) and the Ofcom sale of spectrum is for… wait for it… 1800MHz. Yeah, so forget it!
Thanks to @steven_amani for assistance with fact checking and @jaffo for the alert on the article’s original broken formatting!
Just for fun, I thought I’d set my telescope up and, instead of putting its time towards a project, simply point it out of our own galaxy to see if I could spot other galaxies far, far away.
You see, all the stars you can see in the night sky with your eyes are in our galaxy – bar none. Almost all of the beautiful pictures of nebulae and interesting space scenes come from our own galaxy, the Milky Way; it’s all in here:
But of course, the galaxy that we find ourselves in isn’t alone. So looking up around 10/11pm to the East, you’ll see the constellation of Leo rising above the horizon. There are a lot of interesting things to see in Leo but I’m not interested in those objects for the moment.
Instead, I deliberately chose an area of the sky that is rather dull, just north of the unimaginatively named star system “93 Leo”. It’s a double star (two suns orbiting each other) at approximately mag 4.5. “Mag”, or “magnitude”, is a measure of the apparent brightness of an object; the scale is logarithmic, with a difference of five magnitudes corresponding to a factor of 100 in brightness, so each magnitude step is about 2.5 times brighter (or dimmer) than the next.
(93 Leo, at mag 4.5, is not visible to the naked eye from my location and has to be observed through strong filters to cut out the background skyglow.)
I took six fifteen-minute exposures and combined them, applying flat, dark and bias calibration frames (to remove artefacts from the imaging process), and processed the image data. To my surprise, I found several other distant, dim galaxies in the picture. I looked up all the galaxies in the area using Stellarium and have marked their locations in the image:
NGC 3886: Mag 14
NGC 3875: Mag 15
NGC 3873: Mag 14
NGC 3861: Mag 14
NGC 3851: Mag 15
NGC 3845: Mag 15
NGC 3844: Mag 15
NGC 3840: Mag 14
NGC 3841: Mag 15
NGC 3842: Mag 13 (brightest)
NGC 3837: Mag 14
NGC 3860: Mag 14
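Because the magnitude scale is logarithmic (a difference of five magnitudes is exactly a factor of 100 in brightness), it’s easy to work out how much fainter these galaxies appear than 93 Leo. A quick sketch in Python:

```python
def brightness_ratio(mag_faint, mag_bright):
    """How many times brighter the mag_bright object appears than
    the mag_faint one; five magnitudes is exactly a factor of 100."""
    return 100 ** ((mag_faint - mag_bright) / 5)

# 93 Leo is mag 4.5; NGC 3842, the brightest galaxy in the field, is mag 13.
ratio = brightness_ratio(13, 4.5)
# ratio ≈ 2512: 93 Leo appears roughly 2,500 times brighter than NGC 3842
```

So even the brightest galaxy in this field is thousands of times fainter than a star that itself needs a telescope from my location, which is why the long exposures matter.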
It’s also amazing that this little unassuming mark, denoting the presence of NGC 3842, was made by photons (little packets of light energy) that have travelled 320,000,000 light years. That is, they’ve been travelling through the freezing cold and black of space, hitting nothing (not even a speck of dust), before falling through my telescope and filter and onto the camera, where their journey was recorded.
The fossil record shows reptiles starting to evolve around 300 million years ago. By the time of the first reptiles, the light from this particular distant galaxy had already been travelling for 20 million years.
Here’s a chart of all the unique traffic to this blog from December 2011, over the holiday season and into 2012. It’s interesting to see traffic fall off dramatically in the days leading up to Christmas and New Year. Boxing Day was the busiest day for visitors, and surprisingly around 530 people logged onto www.mikewilson.cc on Christmas Day!* Excluding the lowest and highest points, daily visits ranged from 375 to 650, almost exactly the same range as back in October (but now with more unique visitors overall). The slight upward trend in traffic is encouraging:
*A day is considered +/- 12 hours from GMT. As a lot of traffic comes from the US and Canada, they may be reflected a day earlier or later in the chart.
My latest image was taken last month, on 12th December 2011. It shows a view of the Rosette Nebula in hydrogen-alpha (656nm +/- 7nm), a careful ‘slice’ of the full colour (white) light spectrum that sits in the deep red portion of the spectrum.
The image shows clouds of gas, excited by heat and radiation from nearby stars, giving off a red light. I’ve rendered it in black and white as that shows up the structure better.
“The cluster and nebula lie at a distance of some 5,200 light-years from Earth (although estimates of the distance vary considerably, down to 4,900 light-years.) and measure roughly 130 light years in diameter. The radiation from the young stars excite the atoms in the nebula, causing them to emit radiation themselves producing the emission nebula we see. The mass of the nebula is estimated to be around 10,000 solar masses” – Wikipedia
I imaged this for the first time with exactly the same filter (using a 5” Newtonian telescope and a DSLR camera) back in March 2011.