Tonight I came across an issue in GitBook. First of all, I want to say that it’s a wonderful piece of software, and the team is trying super hard to build a great service out of gitbook.com to empower self-publishers, but this curious bug had me puzzled for quite a while.
My Linux Mint 18 machine was refusing to wake up from suspend/hibernate today after I had closed the lid. It’s a little odd, because I went for the Cinnamon installation variant with all the other niceness, an image that wouldn’t even fit on a 2GB USB flash drive.
The only thing I apparently had to do was open the Driver Manager and install the graphics drivers, followed by a
sudo apt-get update && sudo apt-get dist-upgrade
A few weeks ago we decided to switch our static site to Ghost and write a theme for it. Now, Ghost isn’t an optimal choice for a photography portfolio, and we’ll get into why that’s the case in a bit, but we both like blogging too, so we wanted to give it a shot.
In the last post we had a look at how to create an Express app with JWT authentication.
Why No Framework?
I wanted to write a post simply illustrating how everything works and how easily this can be achieved without any frameworks in place. A framework is just another layer of abstraction you have to learn if all you’re trying to build is some sort of login for your web app.
This post is going to be about setting up authentication with JSON Web Tokens for your project, presumably an API that’s going to be consumed by Angular, Vue.js or a similar frontend framework. We’re going to send the JWT with every request, meaning that we don’t rely on sessions; we simply attach the token to every request we make to the API. This way you don’t have to worry about cookies, and you can store the token in localStorage or elsewhere on the frontend.
In essence this tutorial will go through:
- creating a /login route to acquire a token
- creating a /secret route that is only available to logged-in users presenting a valid JSON Web Token
If you’re curious about the final result and don’t want the step-by-step guide, check out the final jwt express gist.
Angular 2 is out, and if you haven’t already, you should definitely check out angular.io, because the most valuable beginner material is in their Getting Started guide and the following Tour of Heroes tutorial.
Angular 2 projects require quite a bit of setup, unlike most Angular 1 projects, which is a little inconvenient if you just want to quickly try out a couple of things and see if the framework fits your needs. Luckily, there is angular-cli, which creates a boilerplate project for you whenever you feel like it.
Getting started with a fresh Angular project is pretty easy using angular-cli. Basically it takes care of initialising an Angular project, including TypeScript, webpack bundling and development-server support.
npm install -g angular-cli
ng new awesome-project
cd awesome-project
npm install
ng serve
I recently came across this error message on one of my virtual private servers, where I was trying to have a closer look at the traffic. vnstat is a really cool tool for that sort of thing, and on most Linux servers you should be able to install it with sudo apt-get install vnstat.
I was running vnstat -l and got the following response:
getting traffic...
Error: Unable to get interface "eth0" statistics.
Error: Interface "eth0" not available, exiting.
Reddit, the self-proclaimed front page of the internet, can be a great source to drive traffic to your blog and get feedback. I only started being active in some of the programming and photography subreddits in recent months, but it’s been a great experience, especially because of all the cool comments and critique of my posts!
How to use reddit, or any social network for that matter, to gain traffic is probably a hot topic anywhere, but I want to get into more than just gaining visitors. Some of the most valuable feedback I have received as a blogger came from posts on reddit and Hacker News.
My most-clicked post was a political piece about Obama’s comment on allowing phones to be accessed by government agencies without restriction. Most hits came from Hacker News, but reddit was a factor too.
There are “The framework I’m using is superior to yours!” discussions en masse.
I just want to give a quick overview of what I think of the matter, because many people have good points, but my TL;DR version would be: do whatever you want, but don’t tell other people what they should do.
Web scraping is essentially parsing the HTML output of a website and extracting the parts you want to use for something. In theory, that’s a big part of how Google works as a search engine: it visits every web page it can find and stores a copy locally.
For this tutorial, you should have Go installed and ready to go, as in, your $GOPATH set and the compiler installed.
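Before getting into fetching real pages, the extraction step can be sketched on its own. This is a minimal, self-contained example, assuming the HTML has already been downloaded (in practice you would fetch it with http.Get); the page content is inlined so it runs without network access, and a real scraper should use a proper HTML parser such as golang.org/x/net/html rather than regular expressions.

```go
package main

import (
	"fmt"
	"regexp"
)

// A stub of the HTML a scraper might have fetched; inlined so the
// sketch runs without network access.
const page = `<html><head><title>Example Domain</title></head>
<body><a href="/about">About</a><a href="/blog">Blog</a></body></html>`

// extractTitle pulls the text between <title> tags.
func extractTitle(html string) string {
	re := regexp.MustCompile(`<title>(.*?)</title>`)
	m := re.FindStringSubmatch(html)
	if m == nil {
		return ""
	}
	return m[1]
}

// extractLinks collects every href attribute it can find.
func extractLinks(html string) []string {
	re := regexp.MustCompile(`href="([^"]+)"`)
	var links []string
	for _, m := range re.FindAllStringSubmatch(html, -1) {
		links = append(links, m[1])
	}
	return links
}

func main() {
	fmt.Println(extractTitle(page)) // Example Domain
	fmt.Println(extractLinks(page)) // [/about /blog]
}
```

Swapping the constant for the body of an http.Get response gives you the “go to a page, keep the parts you want” loop described above.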