28/11/2018 18:56 GMT

I’m used to working with Windows and Linux, so I needed something that is compatible with both operating systems. I did some research on embedding assets and pinning them to a single file, and I had a look at a few existing solutions online, but I just didn’t like any of them; they often rely on go generate to embed the assets, and I honestly thought that approach was very sloppy.

So I decided to build my own solution and came up with ZipFS, which happens to be based on the classic and proven Zip archive format. I thought this was simple yet brilliant. How does it work? It’s quite simple: first you create the zip archive with all the assets in it, with compression disabled, and then append the content of the zip file to the end of the compiled application, which is very easy to do; all you have to do is run the following Unix command.

$ cat asset.zip >> application

Provided you have zipfs.InitZipFs("asset.zip") in the source code of the application.
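
For the curious, here is a rough sketch of how an appended archive can be read back at runtime with nothing but the standard library. This is not the actual ZipFS code, just an illustration; depending on your Go version you may also need to fix up the central directory offsets after the concatenation, for example with zip -A application.

package main

import (
	"archive/zip"
	"fmt"
	"log"
	"os"
)

func main() {
	// Open the running executable; the zip archive was appended to the end of it.
	exe, err := os.Executable()
	if err != nil {
		log.Fatal(err)
	}
	f, err := os.Open(exe)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	info, err := f.Stat()
	if err != nil {
		log.Fatal(err)
	}

	// The zip reader locates the archive's directory from the end of the file.
	r, err := zip.NewReader(f, info.Size())
	if err != nil {
		log.Fatal(err)
	}

	// List the embedded assets.
	for _, zf := range r.File {
		fmt.Println(zf.Name)
	}
}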

All the assets you see on this site, for example the favicons, *.css and *.js files, are served with ZipFS, so if you can see them, you know it’s working. It works so well that I decided to stop using Docker containers for my web application, as ZipFS is now doing the isolation and also because Docker has issues with IPv6 at the moment.

I think it’s awesome that I can use such a legacy solution to simplify the entire site down to one file, making deployment a breeze; it’s very quick with SCP, so satisfying and very easy to automate.

Special thanks to Rémy Oudompheng, Mechiel Lukkien and Derek Parker; without them, I couldn’t have come up with ZipFS. Of course it’s not a new idea, it’s been done before, but it’s still awesome!

P.S. Those PHP and Node users think I’m the crazy one!

14/10/2018 13:35 BST

I’m really not looking forward to writing this, or am I? Anyway, here goes the rant!

What the fanboys want you to know about NodeJS:

  • It’s JavaScript: the ability to work with the same language on the frontend and the backend.
    • As if being a polyglot is a bad thing; I beg to differ. I know Go, PHP, SQL and JavaScript, I’ve had exposure to Python, Ruby, Java, C and C++, and I have thought about learning C# and Rust.
  • That it’s fast and highly scalable.
    • This is very debatable. From what I’ve read, it’s good with I/O-bound work but not with CPU-bound work. Why not pick a language that is good with both? Like Go, for example (see the little sketch after this list).
  • That it’s very easy to write code with.
    • Maybe for a small to medium application, but what about a very big application?
  • That it’s highly accessible.
    • That is true, I can’t argue with that. JavaScript is practically in every web browser.
    • Keep in mind you can use any language on any VPS, such as DigitalOcean, and there is also WebAssembly, which is currently in its infancy but will get better with time.
  • That it’s hugely popular and has a good ecosystem.
    • But popularity does not make it a product of exceptional quality; a highly popular product can be badly thought out. This logic can be applied to other products such as MySQL and PHP 😁
    • Good ecosystem? I thought the ecosystem was a game of Jenga; I’ll get to the package manager in the next section.
  • They like to point out that there are big companies using NodeJS, such as PayPal, Walmart, NASA and Uber.
    • Yet there are barely any job openings for NodeJS at those companies; they are mainly job openings for Java devs, and even where there are openings for Node, they are mainly for the frontend. 😂😂😂
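
To put a little code behind that point about computation: here is a tiny, purely illustrative Go sketch that spreads CPU-bound work across every core with goroutines. The function and the numbers are made up for the example; it is not code from this site.

package main

import (
	"fmt"
	"runtime"
	"sync"
)

// countPrimes stands in for any CPU-bound piece of work.
func countPrimes(from, to int) int {
	count := 0
	for n := from; n < to; n++ {
		if n < 2 {
			continue
		}
		isPrime := true
		for d := 2; d*d <= n; d++ {
			if n%d == 0 {
				isPrime = false
				break
			}
		}
		if isPrime {
			count++
		}
	}
	return count
}

func main() {
	workers := runtime.NumCPU()
	const limit = 2000000

	counts := make([]int, workers)
	var wg sync.WaitGroup

	// Split the range into one chunk per core and crunch the chunks in parallel.
	chunk := limit / workers
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func(w int) {
			defer wg.Done()
			from := w * chunk
			to := from + chunk
			if w == workers-1 {
				to = limit // last worker picks up the remainder
			}
			counts[w] = countPrimes(from, to)
		}(w)
	}
	wg.Wait()

	total := 0
	for _, c := range counts {
		total += c
	}
	fmt.Println("primes below", limit, ":", total)
}
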
20/09/2018 18:41 BST

A little while ago I implemented a JSON feed for my blog entries. I could have implemented RSS and Atom as well, but I really want to avoid using XML because of its complexity, so instead I just have the JSON feed; to be frank, I hate XML with a passion.

It’s live at https://www.cj-jackson.com/feed.json and working nicely. It was easy to implement, so easy that I can show you the source code in Go.

package frontJsonFeed

import "time"

const (
	Version1 = "https://jsonfeed.org/version/1"

	ContentType = "application/json"
)

type Feed struct {
	Version     string `json:"version"`
	Title       string `json:"title"`
	HomePageUrl string `json:"home_page_url"`
	FeedUrl     string `json:"feed_url"`
	Items       []Item `json:"items"`
}

type Item struct {
	Id            string     `json:"id"`
	Url           string     `json:"url"`
	Title         string     `json:"title"`
	ContentHtml   string     `json:"content_html"`
	DatePublished time.Time  `json:"date_published"`
	DateModified  *time.Time `json:"date_modified,omitempty"`
}

// CheckTime returns nil for the zero time, so that date_modified is
// left out of the JSON output via omitempty.
func CheckTime(t time.Time) *time.Time {
	if t.IsZero() {
		return nil
	}

	return &t
}
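
And here is a rough sketch of how a handler could serialise it; the titles, URLs and dates below are sample values rather than the exact code running on this site.

package frontJsonFeed

import (
	"encoding/json"
	"net/http"
	"time"
)

// ServeFeed is illustrative; the real site wires the feed up differently.
func ServeFeed(w http.ResponseWriter, r *http.Request) {
	feed := Feed{
		Version:     Version1,
		Title:       "Example Blog",
		HomePageUrl: "https://example.com/",
		FeedUrl:     "https://example.com/feed.json",
		Items: []Item{
			{
				Id:            "https://example.com/#entry-1",
				Url:           "https://example.com/#entry-1",
				Title:         "Hello, JSON Feed",
				ContentHtml:   "<p>First entry.</p>",
				DatePublished: time.Date(2018, 9, 20, 18, 41, 0, 0, time.UTC),
				DateModified:  CheckTime(time.Time{}), // zero time, so omitted
			},
		},
	}

	w.Header().Set("Content-Type", ContentType)
	_ = json.NewEncoder(w).Encode(feed)
}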

If I build an API, it will also be in JSON, as it’s the future; I would rather stop idling in the past, which is XML.

I left a hint about one of the reasons why I would never consider buying a Mac!!

09/09/2018 20:27 BST

I thought I’d share this video; the way Kat Zien explains structuring code is quite similar to how I structure my own.

I honestly did not think about Hexagonal Architecture while I was writing well-structured code, yet what I ended up with is pretty much in line with that architecture in terms of code maintenance.

I do agree with Kat on init functions: they can cause issues, especially while running tests, as they can accidentally initialise a database connection. That bites in particular if you’re using continuous integration such as Travis CI, where there is no database. There are two ways of getting around that: either you use build tags, or better yet you avoid init functions altogether and use something like a context system such as ctx (that’s the tool I use for dependency injection on this site; it’s pretty cool, no YAML or XML to worry about like you do with Symfony, sorry, couldn’t help it 😀).
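
To make that concrete, here is a tiny illustration with made-up names: a package-level init that pings the database will blow up on a CI box with no database, whereas an explicit constructor only runs when the context/dependency-injection layer decides to call it.

package storage

import (
	"database/sql"

	_ "github.com/lib/pq" // driver import, purely as an example
)

// What I avoid: an init that connects on import, because it also fires
// during go test on a CI machine where there is no database.
//
//	var db *sql.DB
//
//	func init() {
//		db, _ = sql.Open("postgres", "postgres://…")
//		if err := db.Ping(); err != nil {
//			panic(err) // tests on Travis CI die right here
//		}
//	}

// What I prefer: an explicit constructor, so the caller (or the
// dependency injection layer, ctx in my case) decides when to connect.
func NewDB(dsn string) (*sql.DB, error) {
	db, err := sql.Open("postgres", dsn)
	if err != nil {
		return nil, err
	}
	return db, db.Ping()
}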

As for mocks, instead of using a subpackage I would rather use a build tag, // +build debug, and a double extension, *.mock.go; I find it cleaner that way. I also use that convention with SQL (*.sql.go) and HTML (*.html.go), so I can easily identify where the HTML and SQL are later on, without the build tag of course. 🙂
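
Here is a sketch of what that convention looks like, with made-up types; imagine this file sitting next to userstore.go as userstore.mock.go, and only being compiled when you run go test -tags debug.

// Built only for debug builds, e.g. go test -tags debug

// +build debug

package user

// MockStore is a hand-rolled stand-in for the real store; thanks to the
// build tag it never ends up in a production binary.
type MockStore struct {
	Users map[string]string
}

func (m MockStore) Get(id string) (string, bool) {
	name, ok := m.Users[id]
	return name, ok
}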

The biggest API I’m using on my site is redis. Using that API directly across the entire site would have been a very bad idea in my humble opinion, because I would find it very difficult to create a mock for it, and if they ever stop maintaining it, I would have to replace it across the entire site, and that’s no fun. So instead I created a small API in front of the big API and use the small API throughout the site; it’s much easier to create a mock for the small API, and if the big API does stop being maintained it won’t have a big impact, because I only have to update the small API and that’s it. 👍
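
Roughly speaking, the shape of it looks like this. The interface and names below are illustrative rather than the site’s actual code, and I’m only using the go-redis client as a stand-in for the “big” API.

package cache

import (
	"time"

	"github.com/go-redis/redis" // the big third-party API, as an example
)

// Store is the small API the rest of the site talks to. It is trivial to
// mock and, if the underlying client is ever abandoned, only this file
// has to change.
type Store interface {
	Get(key string) (string, error)
	Set(key, value string, ttl time.Duration) error
}

// redisStore adapts the big API to the small one.
type redisStore struct {
	client *redis.Client
}

func NewRedisStore(client *redis.Client) Store {
	return redisStore{client: client}
}

func (r redisStore) Get(key string) (string, error) {
	return r.client.Get(key).Result()
}

func (r redisStore) Set(key, value string, ttl time.Duration) error {
	return r.client.Set(key, value, ttl).Err()
}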

I find writing good code self-documenting, maintainable and, the most important part, fun, fun, fun! Who can say no to that? 😉