Auto Hosting DropBlog Images on S3

As I discuss on the DropBlog project page, one key area that is a pain to deal with when blogging (especially about technical things) is hosting various images, aggregating them, and distributing them in a clean way.

The Problem

When I blog, I have images from several different sources: art files, diagrams, pictures I've taken, or simple screen captures. Generally, I can use something like Skitch to share a hosted image through public Evernote hosting, but I have been burned before when I accidentally deleted some images and they ended up broken on the blog.

Furthermore, my existing blog has a combination of poorly organized images in the source repository and others hosted elsewhere. There needs to be a better, consistent way to quickly organize and store pictures and make them available for use by the application.

Proposed Solution

Because I already document things along the way, and even use photo upload and some IFTTT recipes to auto-save things like Instagram pictures and Facebook uploads to Dropbox, it should really just be a matter of dragging project or article images into their respective folders and referencing them relatively while authoring.

This way, I'll see a preview of the image in the Markdown file, and use basic Markdown syntax: ![](../path.jpg)
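For instance (the folder and file names here are purely illustrative), an article's images might live right next to its Markdown file:

```
Dropbox/Blog/articles/my-new-workbench/
  my-new-workbench.md
  images/
    workbench.jpg
```

With that layout, the reference in the article would simply be ![My desk](./images/workbench.jpg), or a ../ path if the Markdown file sits a level deeper.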

Also, because the paths should always begin with a relative modifier ./ or ../, it will be simple for the Markdown processor to do these things:

  • Hey this is an image...
  • Oh, it looks like it is a relative image...
  • Let me take the image slug or file name and use it to query for an associated image for this article or project...
  • Hey, I found that image record, and here is a path to some CDN backed public store that will serve it super fast...
  • Actually, I'll just output that path, right here.

I can also still parse normally linked images, both for backwards compatibility and for times when I simply want to show an image hosted elsewhere. Any path that does not start with a relative modifier will just be displayed as-is, with no lookup. This might be valuable for displaying, say, an image hosted on Google, where I'd want it to update if I change the original. My goal, however, is to have the majority of images stored alongside their projects and articles.
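As a minimal sketch of that lookup-and-rewrite step, assuming a simple regex over the article's Markdown and a cdn_lookup mapping standing in for the real image-record query (the slugs and URLs in the example are made up):

```python
import os
import re

# Matches basic Markdown image syntax: ![alt](path)
IMAGE_PATTERN = re.compile(r'!\[(?P<alt>[^\]]*)\]\((?P<path>[^)\s]+)\)')

def resolve_images(markdown_text, cdn_lookup):
    """Rewrite relative image references to their CDN-hosted equivalents.

    cdn_lookup maps an image slug (file name without extension) to the
    public CDN URL stored in the image record. Anything that does not
    start with ./ or ../ is passed through untouched, with no lookup.
    """
    def replace(match):
        path = match.group('path')
        if not path.startswith(('./', '../')):
            return match.group(0)              # external image: leave as-is
        slug = os.path.splitext(os.path.basename(path))[0]
        cdn_url = cdn_lookup.get(slug)
        if cdn_url is None:
            return match.group(0)              # no record yet: fall back to the original path
        return '![{}]({})'.format(match.group('alt'), cdn_url)

    return IMAGE_PATTERN.sub(replace, markdown_text)

# Example: a relative reference gets swapped for its CDN path,
# while an absolute URL is displayed unchanged.
lookup = {'workbench': 'https://cdn.example.com/images/workbench.jpg'}
print(resolve_images('![My desk](../images/workbench.jpg)', lookup))
print(resolve_images('![Logo](https://example.com/logo.png)', lookup))
```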

Behind the Scenes

Well, this is all well and good on the display side, but what about the heavy lifting on the Dropbox and S3 side? This is the basic progression I see happening (sketched in code after the list):

  • A Dropbox webhook kicks off a delta check on the blog root folder, as usual, because an image has been added.
  • The software will know which project or article an image belongs to from the slug in its path.
  • The software will acquire the image from Dropbox.
  • Then, we upload the image to an S3 Bucket and get the CDN path back.
  • Finally, we create an image association to the related article or project, and store the path in the database.
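Here is a rough sketch of that pipeline in Python. One note: in the current Dropbox API, the old delta call is handled with list-folder cursors, so that's what this uses. The access token, bucket name, CDN host, and save_image_record function are all placeholders for whatever the real app wires up:

```python
import mimetypes
import os

import boto3
import dropbox

# Placeholders: real tokens, bucket, and CDN host come from configuration.
dbx = dropbox.Dropbox(os.environ['DROPBOX_ACCESS_TOKEN'])
s3 = boto3.client('s3')
BUCKET = os.environ.get('IMAGE_BUCKET', 'my-blog-images')          # hypothetical bucket
CDN_BASE = os.environ.get('CDN_BASE', 'https://cdn.example.com')   # hypothetical CDN host


def save_image_record(slug, dropbox_path, cdn_url):
    """Hypothetical persistence hook: look up the article or project by the slug
    in the path, create the image association, and store the CDN URL."""
    print(slug, dropbox_path, cdn_url)


def sync_new_images(cursor):
    """Process everything that changed since the last webhook ping. API v2 replaces
    the old /delta call with files_list_folder_continue against a stored cursor."""
    result = dbx.files_list_folder_continue(cursor)
    for entry in result.entries:
        if not isinstance(entry, dropbox.files.FileMetadata):
            continue  # ignore folders and deletions in this sketch
        if not entry.name.lower().endswith(('.jpg', '.jpeg', '.png', '.gif')):
            continue

        # 1. Acquire the image from Dropbox.
        _, response = dbx.files_download(entry.path_lower)

        # 2. Upload it to the S3 bucket; the key mirrors the Dropbox path.
        key = entry.path_lower.lstrip('/')
        content_type = mimetypes.guess_type(entry.name)[0] or 'application/octet-stream'
        s3.put_object(Bucket=BUCKET, Key=key, Body=response.content,
                      ContentType=content_type)

        # 3. Create the association and store the CDN-backed public path.
        slug = os.path.splitext(entry.name)[0]
        save_image_record(slug, entry.path_lower, '{}/{}'.format(CDN_BASE, key))

    return result.cursor  # persist this for the next webhook notification
```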

Modifying an image's path should destroy the old image and create a new one; updating the file itself, as with projects and articles, should trigger some sort of update to the existing image record.
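For what it's worth, a move or rename typically surfaces in the change listing as a deletion of the old path plus a new file at the new path, so the destroy-and-remake behavior mostly falls out of keying image records by path. A tiny sketch, with images standing in for the real persistence layer:

```python
import dropbox

def apply_change(entry, images):
    """Apply one Dropbox change entry to a hypothetical path-keyed store of image records."""
    if isinstance(entry, dropbox.files.DeletedMetadata):
        # Old path disappeared (deleted or renamed away): destroy the record.
        images.pop(entry.path_lower, None)
    elif isinstance(entry, dropbox.files.FileMetadata):
        # A new path gets a fresh record; a re-upload to the same path updates in place.
        record = images.setdefault(entry.path_lower, {})
        record['rev'] = entry.rev
```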

This could also allow for future pre-processing of images, whether for compression, resizing to a standard size, or fancier things like pulling out Exif data to auto-populate captions.
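For the caption idea, a library like Pillow could pull the Exif data out at upload time. A small sketch, assuming the caption lives in the standard ImageDescription tag (which won't always be the case):

```python
from PIL import Image, ExifTags

def caption_from_exif(path):
    """Return the Exif ImageDescription for an image file, or None if absent."""
    exif = Image.open(path).getexif()
    for tag_id, value in exif.items():
        if ExifTags.TAGS.get(tag_id) == 'ImageDescription':
            text = str(value).strip()
            return text or None
    return None

# Hypothetical usage at upload time:
# caption = caption_from_exif('workbench.jpg')
```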

As with this whole project, the idea is to have as few duplicative files and content as possible. If images need captions, why not store them with the image?

Hopefully, this will not be too complicated to build.
