It's pretty obvious that conveying a sense of location and distance is key to a travel blog. My travel blog is simply the default standard list of posts, which conveys little of the place and route. And that is frustrating: when I look at my blog as an outsider might, I realize how the sense of the journey is muddied and indistinct.
Casting about for small projects to take on, I decided to fix this. Here are a bunch of ideas that came to mind to make it easier for readers to get a sense of each post's context, what came before and what came afterwards:
Using Categories and Tags consistently to identify all the places visited, then producing index pages for each place to quickly look up posts related to it
Writing up an itinerary page which puts all the posts in a time-and-location context
Having each post carry identifiers for the locations it references
Research
Before trying to solve the problem myself, I did a bit of research on existing solutions. This is a safe place, so I can admit that I was secretly hoping none of the existing solutions would work, that I'd "have" to do it myself. This is a sickness among engineers.
This plugin helps you create a simple travel map to display on your blog. The map uses the Google Geocharts API. Markers are placed on all locations found in your posts (inside a custom field of your choice). Clicking a marker links to the post associated with that specific location. Hovering over a marker opens a popup with the title and a thumbnail (if the post at that location has one).
Javascript library for displaying KML files (Google Earth). It seems it doesn't do the collection piece of getting data from the blog; you need to generate the KML files elsewhere.
With Map My Posts, you can easily create maps plotting the location of your posts, based upon your existing tags or categories.
▪ 3 map types available: Google Maps, Google Static Maps (PNG image), and Geochart Visualizations.
▪ Embed maps on any page using the shortcodes [mmp-map], [mmp-staticmap], and [mmp-geochart]
▪ Associate with a country or any specific map location.
▪ Map My Posts uses existing category or tag names to help define country associations.
▪ Widgets available to display maps in the sidebar.
▪ Full control over size, colors, and click functions.
▪ Perfect for travel bloggers, touring musicians, or anyone else that wants to Map My Posts!
The reviews give kudos for the tag-to-location mapping UI, but take off a lot of points for lack of flexibility.
When editing a post or page, you will be able to set a physical location for that post and easily embed a Google map into your post. You can select the location by:
1. Clicking on the map of the world to position the point.
2. Searching for a location, town, city or address.
3. Entering the latitude and longitude.
The WP Geo location selector is styled to fit seamlessly into the latest version of the WordPress admin.
More information can be found at http://www.wpgeo.com/.
The way Tripply handles locations referenced in an article is great: a superscript identifies a marker on a map rendered in the margins.
Summary
There are a bunch of solutions for adding more geographical information to a post. The common solution in the WordPress community is to keep location data alongside the post using tags or categories and to generate these by hand, as part of the posts' creation. This involves some kind of manual entry -- either by adding categories or via shortcodes. I'd much rather work on an automated solution than a manual one.
This article offers another option: use photo GPS metadata to determine the location related to a post. This is super attractive because it's automatic: I don't have to go back and add location data for all the posts I've already written.
Not knowing about the innards of WordPress, I didn't want to extract the GPS data in the WordPress upload path and store it in the WordPress database as the article shows. And though I'm sure there must be ways to batch-process previous posts using a similar technique, they aren't obvious to me in my ignorance. Perhaps I'll save that for a future refinement.
What I can easily do is write something on the client side. Drawing on my previous experience with the WordPress client API and my increasing familiarity with Ruby, I decided to opt for a client-side approach. This easily handles the previous-post issue, but it's less elegant for adding location information to future posts.
Finally, essentially every solution uses the Google Maps API -- so too shall I.
Solution
The design I chose is straightforward:
Enumerate all the posts, and then all the images in each post
Extract the location data for each image
Produce a database mapping of post to location(s)
Render the database as a table of contents map allowing one to quickly see how many posts come from where and rapidly see a list from each location
(optionally) populate category metadata for each post with the location information thus mined, allowing any WordPress plugin of choice to embed location information within each post
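To make the middle steps concrete, here's the rough shape of the post-to-location mapping the scripts build up. The field names here are illustrative, not the actual markers.js schema:

```javascript
// Illustrative shape of the post-to-location database (not the real
// markers.js format): one record per post, with the coordinates of
// each geotagged image found in it.
var markers = [
  {
    postId: 123,
    title: 'Wat Pho',
    url: 'http://example.com/2014/02/wat-pho/', // hypothetical URL
    locations: [
      { lat: 13.7465, lng: 100.4927, taken: '2014-02-11T09:30:00Z' }
    ]
  }
];
```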
Gotchas and Workarounds
The first implementation of the algorithm above was simple and relatively short; it's in the GitHub repo as backtracker.rb. Sadly, it sucked: only about 20% of the posts came back with any location information at all. Why would this be?
After a lot of investigation, I figured out a couple of causes:
During the second portion of the trip, I refined my Lightroom workflow to tag and export pictures for each post. Unfortunately, there's a small checkbox which lets you select whether to strip the GPS data from the files, presumably for privacy reasons. This option was selected, so a bunch of the pictures didn't have coordinates
My blog editor, MarsEdit, evidently does the same stripping when altering the resolution of an image upon upload.
Later I'd discover another...
No problem, I thought: I have the original master photos, all safe in LightRoom. Well, yes and no. Yes, I have the originals, but no, they're not easy to find: very often the file names were changed. This was done with the best of intentions -- 'IMG0456.jpg' is not as friendly as 'wat-pho-statue.jpg' -- but it made the job of tying blog images without GPS metadata to their originals with GPS metadata much harder.
In the end, I implemented a brute-force solution involving 4 different ways of finding the matching local images:
By matching file names and sizes
By matching creation dates
By comparing perceptual hashes of the images
By mining the MarsEdit plist file containing details about the media upload transactions
While name, date and size are obvious matching characteristics, it's worth describing perceptual hashes and the MarsEdit lookup.
Perceptual Hash
Perceptual hashing is a technique which summarizes an image into a hash with the nice property that similar-looking images land in the same hash bucket. This is in contrast with typical hashes, which take no account of the human perceptual system and operate only on the raw sequence of bytes -- whatever notion of 'similarity' they have, it isn't a visual one.
For example, if you change the resolution of an image, you'll likely completely change the stream of bytes, and so a standard hash -- say MD5 -- will be completely different. A perceptual hash is designed so that the changed image's hash will either be the same value or something very close to it in terms of Hamming distance (i.e. the number of bits you'd need to flip to turn one number into the other). The same applies to other common image manipulations: monkeying with the colors, small rotations, some cropping.
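For a concrete feel for the comparison step, here's a minimal sketch (not the code from this project) of the Hamming distance between two hashes given as equal-length hex strings:

```javascript
// Hamming distance: XOR the hashes nibble by nibble and count set bits.
function hammingDistance(hexA, hexB) {
  var bits = 0;
  for (var i = 0; i < hexA.length; i++) {
    var x = parseInt(hexA.charAt(i), 16) ^ parseInt(hexB.charAt(i), 16);
    while (x) {
      bits += x & 1;
      x >>= 1;
    }
  }
  return bits;
}

// Hashes a couple of bits apart likely come from the same image.
hammingDistance('9f3b2c41aa10ff00', '9f3b2c41aa10ff03'); // => 2
```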
phashion is a Ruby library I used to generate the perceptual hash for each image. It was incredibly easy to use, though for the volumes of images involved I used it to generate the hash only, then saved the hash in the database. In a later operation, a script would read the database and compare hashes using the Hamming distance.
MarsEdit
Many of my image uploads went through MarsEdit's media uploader. This is a great feature, but it defaults to resizing the image. That isn't necessary since WordPress now generates all the thumbnails itself, but it had the effect of changing the file sizes. Worse, I prefer human-readable names to the camera defaults, so the names were different too.
As a shot in the dark, I decided to see if MarsEdit kept a history of these renames. Luckily, it did. Even better, the support forum gave me all the information I needed very quickly. It's so great to have such incredible support from independent software folks. It's a standard I'd like to live up to in all my work.
Final Algorithm
So the final algorithm to find local master copies for all the blog pictures is this:
backtracker.rb uses the RubyPress WordPress client API to go through all the blog posts, find each image in each post, and fetch the first 128Kb of each image to search for the EXIF data (see the sketch after this list). It puts all the results in the JSON file markers.js1.
find_local_blog_images.rb goes through all the local masters, collects all their characteristics, including the perceptual hash, and puts them in a Sqlite database
find_missing_gps_info.rb loads markers.js, examines every image lacking GPS data, and attempts to find a local copy in the database using all the various characteristics.
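The 128Kb trick in step one deserves a note: EXIF lives near the front of a JPEG, so a partial fetch is enough. The actual scripts are Ruby; here's an illustrative node.js sketch of such a ranged request (host and path are hypothetical):

```javascript
// Fetch only the first 128KB of an image via an HTTP Range request --
// enough to capture the EXIF block near the start of a JPEG.
var https = require('https');

function fetchImageHead(host, path, callback) {
  var options = {
    host: host,
    path: path,
    headers: { Range: 'bytes=0-131071' } // first 128KB
  };
  https.get(options, function (res) {
    var chunks = [];
    res.on('data', function (chunk) { chunks.push(chunk); });
    res.on('end', function () { callback(Buffer.concat(chunks)); });
  });
}

// Usage: hand the buffer to an EXIF parser.
fetchImageHead('example.com', '/uploads/wat-pho-statue.jpg', function (head) {
  console.log('fetched', head.length, 'bytes');
});
```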
This performed much better than backtracker.rb alone, as the output of find_missing_gps_info.rb shows:
Images already having GPS 56
Images found by create date: 106
Images found by MarsEdit map: 188
Images found by name and size: 1
Images found by perceptual hash 424
Images having GPS totals:
already had it: 56
found it locally: 199
didn't find it: 212
Posts having GPS totals:
already had it: 13
found it locally: 50
final count without: 22
Even with all this effort, the number of posts without any location information was still much higher than I'd expected. There were only a few posts with stock photos or none at all, so most of them should have had location information.
At my wits' end, the only thing left to do was to brute-force it: examine a bunch of posts I knew to have lots of photos with locations and figure out why they weren't showing up. And that's where the last piece of the puzzle was revealed: large numbers -- maybe up to 50% -- of my pristine master photos didn't have GPS data.
Bummer. I suppose my cameras aren't entirely reliable at getting a GPS fix. And I know I'd had the habit at some times of turning on airplane mode on my phone to save battery and data connection usage. This also turns off the GPS.
Halfway There...
At this point, I found myself in a very common place for software tinkerers: I'd spent a huge amount of time solving a problem I hadn't even known was a problem. I started further behind than I knew. The only benefit gained from the unanticipated effort was a bunch of new things learned, things which might come in handy at some point down the road: more familiarity with Ruby, experience with perceptual hashing of images, Sqlite programming, EXIF data (did you know you only have to GET the first 128Kb of an image to be sure to get its EXIF data?), etc.
Regardless, now I have this shiny new GPS data. It's time to go back to the original point of this effort and actually use it.
Map Table of Contents
The most compelling feature I wanted to add to my blog was an interactive table of contents showing the itinerary of "trip 1" and all the blog posts at the various locations along the way.
Which Location?
Since posts can have multiple images and hence multiple locations, I needed to decide how to handle these: do I collapse them all into one point, or does each post generate multiple markers?
For simplicity's sake, I chose to have one location per post. In order to collapse multiple locations to one, I wrote some Javascript to find the mean location of a series of points. This was also required to determine the initial viewport of the Google map, so that it was nicely centered around all the points on the map. Here's the gist:
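A minimal sketch of that helper, assuming each point carries numeric lat and lng fields:

```javascript
// Average a list of {lat, lng} points into a single center point.
// A naive arithmetic mean -- fine for points clustered in one region,
// though it misbehaves near the poles or across the antimeridian.
function meanLocation(points) {
  var sum = points.reduce(function (acc, p) {
    return { lat: acc.lat + p.lat, lng: acc.lng + p.lng };
  }, { lat: 0, lng: 0 });
  return {
    lat: sum.lat / points.length,
    lng: sum.lng / points.length
  };
}
```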
First Approach
The simple approach is to use the Google Maps API to create a map, then place a marker at each location containing a link to the blog post at that location.
This breaks down very quickly if you have more than one post at any one place: on a map expansive enough to contain both Thailand and Indonesia, all the markers around, say, Bangkok will sit in the same spot. You'll have to zoom in to see the posts.
Another problem is performance: rendering 70+ markers is not fast. It's not impossibly slow, but it's enough of a problem to warrant seeking other solutions.
The second approach is to cluster the posts for one location. [Fluster][fluster] is a Javascript library that lets you do that. It's always distressing when the last commit to a project on GitHub was in 2011, but Fluster worked fine. Its algorithm is straightforward: start with a point at random and cluster all the points which fit within the size of its icon, then show a marker with the number of sub-markers clustered there.
The Fluster solution worked fine. But I also wanted to show the itinerary. So I needed to cluster both by geographical location and by time.
Since I wanted to cluster posts which were not only closely in space but also in time, I needed to control the clustering function. My first stop was k-means, which I've loved since grad school for its intuitive methodology. For this purpose, though, it has a big flaw: you have to guess the number of clusters. While I could have come up with an estimate (say by counting the number of unique cities visited), I liked the idea of the algorithm doing that work for me.
Enter DBSCAN ("Density-based spatial clustering of applications with noise"). I forked a Ruby implementation on GitHub and cleaned up a couple of implementation quirks with which I disagreed. The code worked great and gave me some smart clusters when I provided it a "spatio-temporal" objective function. That's just a fancy way of saying I wanted clusters of events close by in space and in time. For example, two visits to Chiang Mai in different months should be two separate clusters. Clustering this way lets me thread a route through the clusters simply by sorting them in time.
(The 3-tuple of each point is [latitude, longitude, seconds])
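The actual clustering code is Ruby; here's a Javascript sketch of the kind of distance function DBSCAN gets fed, with the space/time trade-off as an assumed tuning knob:

```javascript
// Distance between two [latitude, longitude, seconds] points:
// great-circle distance in km plus a weighted time difference, so two
// visits to the same city in different months fall into different
// clusters. TIME_WEIGHT is an illustrative knob: here, being a month
// apart "costs" as much as being 100 km apart.
var TIME_WEIGHT = 100 / (30 * 24 * 3600); // km per second

function haversineKm(a, b) {
  var R = 6371, rad = Math.PI / 180;
  var dLat = (b[0] - a[0]) * rad;
  var dLng = (b[1] - a[1]) * rad;
  var s = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
          Math.cos(a[0] * rad) * Math.cos(b[0] * rad) *
          Math.sin(dLng / 2) * Math.sin(dLng / 2);
  return 2 * R * Math.asin(Math.sqrt(s));
}

function spatioTemporalDistance(a, b) {
  return haversineKm(a, b) + TIME_WEIGHT * Math.abs(a[2] - b[2]);
}
```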
By modifying the $distanceMultiplier global and re-running the algorithm on each cluster to generate sub-clusters, I can create a hierarchy of clusters to be shown at appropriate zoom levels on the map. The large-scale maps show a reasonable number of markers -- and those markers don't overlap -- but one can always zoom in to a finer scale to see all the markers.
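A sketch of that recursive sub-clustering; here `cluster` stands in for whatever clustering routine is available (the DBSCAN port, in my case), and the halving factor is arbitrary:

```javascript
// Build a hierarchy: cluster at a coarse distance threshold, then
// re-run the clustering on each cluster's points with a tighter
// threshold for the next zoom level.
function buildHierarchy(points, eps, minEps, cluster) {
  // cluster(points, eps) returns an array of { points: [...] } groups.
  var clusters = cluster(points, eps);
  if (eps / 2 >= minEps) {
    clusters.forEach(function (c) {
      c.children = buildHierarchy(c.points, eps / 2, minEps, cluster);
    });
  }
  return clusters;
}
```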
Integrating the Map Into WordPress
The last bit of work was to put the single HTML page into the WordPress site as a Page. This involved figuring out how to create a child theme which let me inject some functions to add the Google Maps API and other script references to the <head> of only the route-map pages (after all, why load all that for every page?).
Conclusion
So [there][sea-map] it is, the pretty route map I wanted, produced by a lot of effort. First I decided to learn Ruby, then to figure out the WordPress API well enough to enumerate all the photos, then how to associate the uploaded photos with the local masters to get the GPS info, then to determine a good clustering algorithm to find the routes, then to learn enough of the Google Maps API to render it nicely.
That's enough for V1 and way, way too much for a single blog post. Yet there's so much more to do: what about the location information for places mentioned in the post? How can I get this to update in the future when I add more posts? I guess those are future projects...
1: Not really JSON, actually Javascript, because I was too lazy to write the AJAX query to load in an actual JSON file.
The first leg of my travels is over. Money is tight, so I was naturally curious to compare the costs of the various countries I visited. I also wondered if I'd come close to my budget, or just blown past it.
This analysis could have been handled entirely in Google Sheets1/, which is where I kept track of the expenses, so that would have been entirely natural.
But the built-in charts aren't exactly what I wanted: I wanted more control over the presentation and some specific interaction designs. Besides, I wanted to refresh my skills after months away from code, so the cost in time of this extravagance was worth it.
So it was that I created some d3.js visualizations of my expense data and published them on the web. Here's how.
Design
The web page needed to import the expense data from Google Sheets and render it in charts. The most salient information is:
Total spend and average spend per day, per category and per country
Top expenses
Trend of those expenses over time
Importing from Google Sheets
There are a couple of options for importing data from Google Sheets. I wanted the solution to be live-updating so I could use this page in future travels for daily monitoring. Any export of the data was thus ruled out.
If you make the sheet public, you can use a client-side approach to querying the data via JSON, as described here. This is a cool technique and I almost used it, but I didn't want this data to be public in its raw form. Not on any 'real' grounds -- I'm not fearful of the security implications of the world knowing my spending habits -- but rather as a matter of principle: I'd rather design something more general, and in general one wouldn't want to share all the raw data all the time.
So that left a server-side solution, something to authenticate in private and then proxy the data over to the client. I found this excellent writeup on how to do so from node.js, which I've been meaning to play with some more, so I ended up implementing that solution pretty much wholesale.
As is typical of any visualization project, the first and lamest part of the work is mangling the data from its original format into one you can use. In the case of this project, that meant massaging the raw rows Google Sheets provides.
Now the server design was pretty clear: a node.js JSON endpoint to serve the data. Since I was already running the express web server to do this, it was a simple matter to add a route to serve some static HTML pages for the client-side pieces as well.
The module travel-data.js loads the spreadsheet and exports the data as budget_info. Then we set up two routes in express, / for the static site and /data for the JSON data.
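A minimal sketch of that wiring, assuming the static files sit in a public/ directory:

```javascript
// server.js -- sketch of the express setup described above.
var express = require('express');
var travelData = require('./travel-data'); // exports budget_info

var app = express();

// '/' serves the static client-side pages.
app.use('/', express.static(__dirname + '/public'));

// '/data' proxies the spreadsheet data over as JSON.
app.get('/data', function (req, res) {
  res.json(travelData.budget_info);
});

app.listen(3000);
```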
To visualize the data, I settled on two views: a daily bar chart with lines for the moving averages of the spend in each category, and a list of the top expenses. These two views operate on one country at a time, selected by an option box. In the future there could easily be an "all countries" mode which shows the views for all the data.
Switching into the "top expenses" mode is accomplished with a button; you get out of it by clicking anywhere on the page. This turned out to be much more natural than a toggle button or two buttons, especially on touch interfaces. This approach is akin to a modal dialog box.
To give some visual cueing to tie the two views together, I decided to transition the bars representing the expenses from the bar chart to the stacked list of bars representing the top expenses. So the biggest expenses fly out from their homes in the bar chart and stack in the center of the screen with their descriptions.
Responsiveness
Any sane web development today needs to work perfectly on mobile devices. I'd sooner argue the side that says everything should be mobile-only than the one that says desktop-only. So this visualization needed to be responsive.
The easiest option is the default: d3.js renders to SVG, which is itself scalable. So just do nothing and users can resize the drawing at will.
There are problems with this approach. Visual elements may fit on the screen and be perfectly rendered, but they may also be tiny -- too tiny to read comfortably. The article "Building Responsive Visualizations in d3.js" elaborates on these problems and provides a solution:
Re-render the chart upon resize
Add or remove tick marks based on screen size
Add or remove datapoints based on screen size; rendering detail beyond the pixel level is wasteful and can also make the chart look "thick".
When the viewport gets too small, switch to sparklines to minimize clutter
I added a stage between the last two: when the viewport gets too small, remove the bars but keep the axes and the lines.
This approach entails moving a bunch of the geometry-specific code to a resize() function which gets called when the containing element changes size. resize() can then make a bunch of decisions about which elements to render based on the size of the viewport.
Resize() notes
In the example from the article, the new elements were re-rendered on resize by updating the scale and then calling the helper d3.js objects/functions to create/update the SVG elements:
```javascript
/* Update the axis with the new scale */
graph.select('.x.axis')
    .attr("transform", "translate(0," + height + ")")
    .call(xAxis);
graph.select('.y.axis')
    .call(yAxis);

/* Force D3 to recalculate and update the line */
graph.selectAll('.line')
    .attr("d", line);
```
This is great, but what do you do if you're making bar charts or rectangles? There's no canned d3.svg.axis() or d3.svg.line() to generate the SVG attributes from the data; one typically sets the attributes directly.
The solution I used was to split out the geometry attribute setting into separate functions, which then get call'ed in the resize function:
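A sketch of that split, assuming d3 v3's ordinal scales and the same ambient graph, x, y, and height as the article's example (sizeBars and the selectors are illustrative names):

```javascript
// Geometry setting lives in its own helper...
function sizeBars(selection) {
  selection
      .attr('x', function (d) { return x(d.date); })
      .attr('y', function (d) { return y(d.amount); })
      .attr('width', x.rangeBand())
      .attr('height', function (d) { return height - y(d.amount); });
}

// ...which resize() calls after recomputing the scales from the
// container's current size.
function resize() {
  width = parseInt(d3.select('#chart').style('width'), 10);
  x.rangeRoundBands([0, width], 0.1);
  sizeBars(graph.selectAll('.bar'));
}
```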
In the end, though, the right design would be to leave all geometry setting to resize(): render the visual elements invisible in the initial setup, avoid setting any geometric attributes on creation, and let the first resize() call place everything.
Nuances
A few small visual nuances help make the rendering more pleasant:
The bar chart is animated so the expenses grow from 0. This is a nice transition into the chart.
In the "Top Expenses" mode, the background is blended, giving it a frosted-glass look to keep it in the viewer's mind but minimize distractions.
I originally wanted a moving average line for each expense category leading to the all-dates average at the right-hand side of the chart. There is no built-in moving average interpolation in d3.js, so I had to write one, cribbing heavily from this article (a sketch follows this list).
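The cribbed version isn't reproduced here; this is a minimal sketch of the idea, a trailing moving average over an array of numbers:

```javascript
// Trailing moving average: each output value averages the last
// `window` inputs (fewer at the start of the series).
function movingAverage(values, window) {
  return values.map(function (d, i) {
    var start = Math.max(0, i - window + 1);
    var slice = values.slice(start, i + 1);
    var sum = slice.reduce(function (a, b) { return a + b; }, 0);
    return sum / slice.length;
  });
}

movingAverage([10, 20, 30, 40], 2); // => [10, 15, 25, 35]
```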
I wanted the moving average series lines to culminate in a label for the series. This makes more sense to me than the typical 'legend box' but was problematic: if the averages are close (e.g. if the responsive design causes the average lines to be just a few pixels apart), the labels will overlap.
My first attempt at a solution was to try to be clever: use d3.js's force-directed layout to lay the labels out. The step function would constrain the x coordinate to stay put, leaving the labels to move gracefully away from each other along a vertical line. This worked, but the effect of the labels bouncing around at page load time was distracting.
The second attempt was less elegant but worked better: query the sizes of the labels and offset them if they overlap. This was faster and conceptually simpler, but made for messy code. The visual effect was better, however, so that's the one I went with.
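A sketch of that offsetting pass, assuming the labels are SVG text elements positioned via a y attribute:

```javascript
// Walk the labels top to bottom and push each one below the previous
// label's bottom edge if they would otherwise overlap.
function separateLabels(labelNodes) {
  labelNodes.sort(function (a, b) {
    return +a.getAttribute('y') - +b.getAttribute('y');
  });
  var lastBottom = -Infinity;
  labelNodes.forEach(function (node) {
    var y = +node.getAttribute('y');
    if (y < lastBottom) {
      y = lastBottom;
      node.setAttribute('y', y);
    }
    lastBottom = y + node.getBBox().height;
  });
}
```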
Visualization
That's it. The visualization is here. There are a number of insights to be had from the charts.
1/ Why Google Sheets and not Mint? Most of the places I visited were cash-only and used multiple currencies. It was easier to track that in Sheets.
I've been blogging during my extended travels. This means my usual clunky vacation photo management workflows are getting strained to their limits as I'm constantly going to interesting places to take pictures and write about them. There is no "after I get back" time to lazily dump work into. And any time I waste futzing with my computer is time I'm not spending enjoying the marvelous places I'm visiting.
So it behoves me to take some of the technology-focus from my old life and attempt to make this part of my new life flow more smoothly.
Tools
There's a lot to write about in the workflow — capture with my Sony super zoom and iPhone, management with LightRoom, wrangling photos from one place to the other with Dropbox and cables, selecting the images, writing and publishing the blog posts using MarsEdit.
But that's not what I want to cover here. I want to cover a small corner of the process, how to publish quick photo-essays to WordPress.
The Problem
Here's what I want to do: I want to come back from an event, import all my photos into LightRoom and then publish from LightRoom to my blog as quickly as possible. There are a bunch of plugins for LightRoom which claim to do the job, but they cost money and/or don't work with LR 4, which is what I use.
I've got a little scripting skill, so maybe I can scratch my own itch.
My ideal workflow is this: take a series of photos, add little blurbs to each one, write a small introduction and post. I can easily select a set of photos in LightRoom and export them. I could then bulk import them into the blog.
That's where the friction comes in: I'm left with a set of files that I need to either manually add to a post and subsequently view so I can add a description, or I need to modify the metadata in LightRoom to make the subject matter clear from the filename.
Both of these options seem like wasted effort. What I want to be able to do is to set proper titles and captions in LightRoom and automatically generate a framework post with all the images titled, with long descriptions, and have those descriptions also appear in the body text. I can then quickly add an introduction and submit the post. The description of the images ought to exist in only one place so I'm not copying it around, and it's the best place for it: my main photo library in LightRoom.
The Solution
Unfortunately no one has a tool to do this. With nice Python WordPress and image metadata libraries, it should be straightforward to write something to do this, right?
Yes and no.
Tags and Tags
There are lots of Python libraries which let you read and modify EXIF tags. Exifread is a single-purpose library for this; PIL (or its fork, Pillow) also has tag functionality built in.
Trouble is that the 'Title' metadata of JPEG files is not in EXIF. Caption is, and we can use that for the long description.
Since 'Title' is such a commonplace word and the EXIF tags are the ones folks seem to be after, it took a lot of messing around with Google to discover that JPEG files have another set of metadata in them, the IPTC tags, and that's where the title lives.
The Script
With that mystery solved, it's a simple matter to write a script to enumerate all the image files in a folder, crack their title and caption, upload the images to WordPress and generate a draft Markdown post using the new image URLs and the caption information.
Sending an email of a webpage is pretty easy. That is, it's easy if the page is static or you're generating the webpage on the server (e.g. with PHP or ASP.NET). If it's not -- if you have some client-side code which alters the page -- then it's not so easy.
This article discusses the paths I went along to mail pages from a fancy new HTML5-based client side web app in a Windows environment. The pages themselves were relatively static, but they were rendered using Javascript based on data loaded via AJAX queries.
Fortunately, we have the ideal tool to render web pages: the browser. All major browsers are scriptable to some degree. To send our page we'll use IE (10, specifically) and script it using powershell since we're on Windows.
```powershell
$ie = new-object -com "InternetExplorer.Application"
$ie.visible = $true
$ie.navigate2($url)

# Wait for the page to start rendering
Start-Sleep -MilliSeconds 5500

$doc = $ie.Document
if (-not $doc) { Write-Error "Browser document is null"; exit(0); }

# Wait for the page to complete rendering
while ($ie.Busy -or $ie.Document.readyState -ne "complete") {
    Start-Sleep -MilliSeconds 100
    Write-Host $ie.Document.readyState
}
```
One unpleasant requirement of this method is that you must be running as administrator. If you aren't, you'll find the document member of the browser object to be null.
Lack of Style
The first approach has one evident flaw: no styling. The document we get from the IE DOM contains only the body and not the head elements. Scripts we can do without; they're not going to be executed by the email client anyway. But without stylesheets the page will be ugly.
You'd think that, with Microsoft's massive leaps forward in web standards conformance in IE, Outlook would use the same rendering engine and have no problems with basic CSS3. You'd be wrong.
So now we have to rewrite the webpage to use fewer bells and whistles, right? Not necessarily. We already have a browser up with our page and it certainly knows how to render the page. Can we pass its knowledge along to Outlook?
With a bit of a hack we can. The particular problem is that Outlook isn't rendering the CSS selectors properly. Instead of relying on CSS to style the page, we can inject a script into IE that overwrites the style attributes with the computed style. Essentially we're fixing the style into place. This will bloat the page of course, but it'll render correctly.
So let's do that, and also remove any script:
```javascript
// Remove all the script on the page; we don't need to email it.
(function () {
    jQuery('script').remove();
})();

$('body').append('<div id="defaultElement"/>');

var hard_code_attributes = ['background-color', 'color', 'font-size', 'font-family'];
var defaultElement = $('#defaultElement').get()[0];

// Add the computed style to the style attribute for every element.
// This prevents incorrect rendering with viewers which can't handle
// CSS3 (i.e. Outlook).
jQuery('*').each(function (e) {
    var newStyle = '';
    for (var i = 0; i < hard_code_attributes.length; i++) {
        var a = hard_code_attributes[i];
        if (this.currentStyle && this.currentStyle[a] &&
            defaultElement.currentStyle &&
            this.currentStyle[a] != defaultElement.currentStyle[a]) {
            newStyle += a + ':' + this.currentStyle[a] + ';';
        }
    }
    this.setAttribute('style', newStyle);
});
```
Final Stumbling Block: SVG
Next we have some pages with SVG. This seems to be another area where Outlook's HTML mail rendering has difficulties.
Fortunately, there's a solution. Yet another messy solution, but one that works. There's a wonderful library called canvg which renders SVG into a canvas. A canvas can then be exported to an image file as a data url:
```javascript
// Add a link back to the original page ($($url) is interpolated by
// the PowerShell script that injects this snippet).
$('body').append('<a href="$($url)">link</a>');

// Render all SVG elements into canvases...
canvg();

// ...then render all the canvases as images.
$('canvas').each(function (d) {
    var img = this.toDataURL('image/png');
    var ii = $('<img/>').attr('src', img);
    $(this).replaceWith(ii);
});
```
One final snag: Outlook can't render large data urls. To get over the size restrictions, we'll need to extract the base64-encoded data from each data url into a separate file and include them as attachments.
```powershell
# Outlook won't render data url images as large as we need, so turn
# each data url into a separate file and reference it by Content-ID.
$attachments = @();
$imgNum = 1;
foreach ($img in $ie.Document.getElementsByTagName("img")) {
    $imgFileName = "img$($imgNum).png";
    $imgNum++;
    $t1 = $img.getAttribute("src");
    $txt = $t1.Replace("data:image/png;base64,", "");
    $img.setAttribute("src", "cid:$($imgFileName)");
    $bytes = [System.Convert]::FromBase64String($txt);
    $decoded = [System.Text.Encoding]::Default.GetString($bytes);
    [Byte[]] $bytes_imagefront = [System.Text.Encoding]::Default.GetBytes($decoded);
    set-content -encoding byte $imgFileName -value $bytes_imagefront
    $attachments += $imgFileName
}
```
Now all the pieces are in place, and we're ready to actually send the email.