How to handle file uploads using actix-web

In this tutorial I’ll demonstrate how to handle file uploads with additional data fields using one of the most popular Rust web frameworks - actix-web, which has become my go-to web framework when developing in Rust.

We’ll start by creating a binary Rust package:

cargo new doc-demo

Then under the project root, run

cargo add actix-web actix-multipart futures-util
cargo add serde --features derive

With your favorite editor, open src/main.rs and copy/paste the following code

use serde::Serialize;
use actix_multipart::Multipart;
use futures_util::TryStreamExt as _;

use actix_web::{post, App, Error as ActixError, HttpResponse, HttpServer};

#[derive(Serialize)]
struct Stats {
    lines: usize,
    #[serde(skip_serializing_if = "Option::is_none")]
    characters: Option<usize>,
}

#[post("/upload_stats")]
async fn upload_stats(mut payload: Multipart) -> Result<HttpResponse, ActixError> {
    let mut file_data = Vec::<u8>::new();
    let mut layout: Option<String> = Some("simple".to_owned());
    // walk through the multipart fields, collecting the file bytes and the layout value
    while let Some(mut field) = payload.try_next().await? {
        // take an owned copy of the field name so the field can be mutably borrowed below
        let field_name = field.content_disposition().get_name().unwrap_or("").to_owned();
        match field_name.as_str() {
            "file" => {
                while let Some(chunk) = field.try_next().await? {
                    file_data.extend_from_slice(&chunk);
                }
            }
            "layout" => {
                if let Some(bytes) = field.try_next().await? {
                    layout = String::from_utf8(bytes.to_vec()).ok();
                }
            }
            _ => {}
        }
    }
    let file_content = std::str::from_utf8(&file_data)?;
    let mut lines = 0;
    let mut char_count = 0;
    for line in file_content.lines() {
        char_count += line.chars().count();
        lines += 1;
    }
    // only expose the character count when layout=advanced was requested
    let characters = if layout.as_deref() == Some("advanced") {
        Some(char_count)
    } else {
        None
    };
    Ok(HttpResponse::Ok().json(Stats { lines, characters }))
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(move || App::new().service(upload_stats))
        .bind(("127.0.0.1", 8080))?
        .run()
        .await
}

This simple web application has a single POST endpoint that accepts

  1. a field named file that points to a file on the client’s file system
  2. an optional field named layout, whose value defaults to simple

By default the output is the line count of the uploaded file, but a characters result representing the number of characters in the file is added if layout is set to advanced. So, for example,

curl http://localhost:8080/upload_stats -X POST -F 'file=@Cargo.toml'

returns something like

{"lines": 13}

While

curl http://localhost:8080/upload_stats -X POST -F 'file=@Cargo.toml' -F 'layout=advanced'

might produce something like

{"lines": 13, "characters": 311}

p.s. here’s the full content of Cargo.toml

[package]
name = "doc-demo"
version = "0.1.0"
edition = "2021"

# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html

[dependencies]
actix-multipart = "0.4.0"
actix-web = "4.1.0"
futures-util = "0.3.21"
serde = { version = "1.0.136", features = ["derive"] }
serde_json = "1.0.81"

Add error code(s) to Hapijs output

Joi validation is powerful and easy to work with; however, it’s not always obvious or easy to add things like error code(s) to the hapijs response. This post will show you a way (or two) to deal with that problem.

Step 1, assign error codes to each validation error

Quick example:

server.route({
  method: 'POST',
  path: '/person',
  options: {
    validate: {
      payload: {
        firstName: Joi.string()
          .min(5)
          .max(10)
          .required()
          .error(errors => {
            errors.forEach(err => {
              switch (err.type) {
                case 'any.empty':
                case 'any.required':
                  err.message = 'Firstname should not be empty!'
                  err.context = {
                    errorCode: 111
                  }
                  break
                case 'string.min':
                  err.message = `Firstname should have at least ${
                    err.context.limit
                  } characters!`
                  err.context = {
                    errorCode: 121
                  }
                  break
                case 'string.max':
                  err.message = `Firstname should have at most ${
                    err.context.limit
                  } characters!`
                  err.context = {
                    errorCode: 131
                  }
                  break
                default:
                  break
              }
            })
            return errors
          })
      }
    }
  },
  handler: async request => {
    // todo: handle saving the payload
    console.log('to save', request.payload)
    return { result: 'ok' }
  }
})

Step 2, customize failAction when creating Hapi server

const Boom = require('@hapi/boom')
const server = Hapi.Server({
  // ...
  routes: {
    validate: {
      failAction: (request, h, err) => {
        const firstError = err.details[0]
        if (firstError.context.errorCode !== undefined) {
          throw Boom.badRequest(err.message, {
            errorCode: firstError.context.errorCode
          })
        } else {
          throw Boom.badRequest(err.message)
        }
      }
    }
  }
})

Step 3, customize response

This step is needed because Hapi would otherwise strip the errorCode attribute injected in step 2 out of the error response payload.

server.ext('onPreResponse', (request, h) => {
  const response = request.response
  if (!response.isBoom) {
    return h.continue
  }
  const { data } = response
  if (data !== undefined) {
    response.output.payload = {
      ...response.output.payload,
      ...data
    }
  }
  return h.continue
})

With the above setup, the request

curl http://localhost:3000/person -d ''

would result in

{"statusCode":400,"error":"Bad Request","message":"child \"firstName\" fails because [Firstname should not be empty!]","errorCode":111}

By default Hapi sets abortEarly: true. If multiple error codes are desired, only step 2 needs to be adjusted to

const server = Hapi.Server({
  // ...
  routes: {
    validate: {
      options: {
        abortEarly: false
      },
      failAction: (request, h, err) => {
        const errorCodes = err.details
          .map(e => e.context.errorCode)
          .filter(e => e !== undefined)
        throw Boom.badRequest(err.message, { errorCodes })
      }
    }
  }
})

the request

curl http://localhost:3000/person -d 'firstName='

would return

{"statusCode":400,"error":"Bad Request","message":"child \"firstName\" fails because [Firstname should not be empty!, Firstname should have at least 5 characters!]","errorCodes":[111,121]}

If you need to add an error code in your application code, you can simply achieve that by returning a Boom error like the following

return Boom.badRequest('Your error message here', { errorCode: YOUR_CODE })
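
In context, the handler from step 1 might end up looking like this (a sketch - savePerson and error code 211 are made up for illustration; the onPreResponse extension from step 3 is what keeps errorCode in the response):

// a drop-in replacement for the handler in step 1
const handler = async request => {
  const saved = await savePerson(request.payload) // hypothetical persistence call
  if (!saved) {
    // the data argument is what the onPreResponse extension merges into the payload
    return Boom.badRequest('Could not save person', { errorCode: 211 })
  }
  return { result: 'ok' }
}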

I’ve composed a gist in case you want to save some typing while trying out the code. Cheers!

How to test nsq with docker (and nsqjs)

Preparations

  1. Find out the docker host’s IP and create the following shell function, replacing 172.17.0.1 with the one found on your system:
# note: DOCKER_HOST_IP, not DOCKER_HOST - the docker CLI reserves DOCKER_HOST for the daemon address
export DOCKER_HOST_IP=172.17.0.1
runnsq() {
  host=$DOCKER_HOST_IP
  docker run --rm --name lookupd -p 4160:4160 -p 4161:4161 -d nsqio/nsq /nsqlookupd
  docker run --rm --name nsqd -p 4150:4150 -p 4151:4151 -d nsqio/nsq /nsqd \
    --broadcast-address=$host --lookupd-tcp-address=$host:4160
  docker run --rm --name nsqadmin -p 4171:4171 -d nsqio/nsq /nsqadmin \
    --lookupd-http-address=$host:4161
}

  2. Pull the nsq image
    docker pull nsqio/nsq

Start the containers

runnsq

Publish a message **

curl -d 'hello world' "http://$DOCKER_HOST_IP:4151/pub?topic=sample_topic"

Set up a consumer

mkdir -p ~/projects/test-nsq
cd $_
npm i nsqjs

Create consumer.js as listed below.

// consumer.js
const nsq = require('nsqjs')

const reader = new nsq.Reader('sample_topic', 'test_channel', {
  lookupdHTTPAddresses: `${process.env.DOCKER_HOST_IP}:4161`
})

reader.connect()

console.log('started')
reader.on('message', msg => {
  console.log('Received message [%s]: %s', msg.id, msg.body.toString())
  msg.finish()
})

Test

node consumer.js

You should see something similar to:

Received message [0abc62e587053000]: hello world

nsqadmin

Open http://127.0.0.1:4171/ from a browser.

** The publisher runs before the consumer so that the topic is created before a consumer can subscribe to it.
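
If you prefer publishing from node as well, here is a minimal nsqjs Writer sketch (it talks to nsqd’s TCP port 4150 and reuses the DOCKER_HOST_IP variable and topic from above):

// publisher.js
const nsq = require('nsqjs')

const writer = new nsq.Writer(process.env.DOCKER_HOST_IP, 4150)

writer.connect()
writer.on('ready', () => {
  writer.publish('sample_topic', 'hello world', err => {
    if (err) console.error(err.message)
    writer.close()
  })
})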

Create a mongodb cluster using Docker with authentication enabled

Preparations

The following needs to be run only once

mkdir -p ~/docker-storage/{rc1,rc2,rc3,mongo-keys}
openssl rand -base64 741 > ~/docker-storage/mongo-keys/keyfile
chmod 600 ~/docker-storage/mongo-keys/keyfile
sudo chown 999 ~/docker-storage/mongo-keys/keyfile
docker network create mongo-cluster

Reusable scripts

start_cluster() {
  for i in 1 2 3; do
    docker run --rm -p 3000$i:27017 --name rc$i --net mongo-cluster \
      -v ~/docker-storage/rc$i:/data/db \
      -v ~/docker-storage/mongo-keys/keyfile:/opt/keyfile \
      -d mongo mongod --keyFile /opt/keyfile --replSet test-set
  done
}

enter() {
  docker exec -it $1 ${2:-bash}
}

runnode() {
  [ $# -lt 1 ] && echo "Usage: $FUNCNAME script" && return
  scriptname=$1
  shift
  others=$*
  docker run -it --rm --name my-node-script -v "$PWD":/usr/src/app -w /usr/src/app $others node:8 node $scriptname
}

Put the above functions into your .bashrc or .zshrc, or just into a plain script file, e.g. util.sh, and source it:

source util.sh

Bring up the cluster and enter rc1

start_cluster
enter rc1
# once inside rc1
mongo

Once in mongo shell,

use admin
db.createUser(
  {
    user: "superuser",
    pwd: "supercool",
    roles: [ { role: "userAdminAnyDatabase", db: "admin" }, "readWriteAnyDatabase" ]
  }
)
quit()

You will be back in the bash shell of rc1; type the following to get back into the first replica with the newly created credentials.

mongo -u superuser -p supercool --authenticationDatabase admin

When mongo shell appears again, issue the following

config = {
  _id: 'test-set',
  members: [
    { _id: 0, host: 'rc1:27017' },
    { _id: 1, host: 'rc2:27017' },
    { _id: 2, host: 'rc3:27017' }
  ]
}
rs.initiate(config)

You should see

{ "ok" : 1 }
test-set:SECONDARY> 

Run rs.status() a few times and you should see rc1 become the PRIMARY node.
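
If you only care about each member’s state, here is a quick check you can run from the same shell (it relies only on the name and stateStr fields that rs.status() reports per member):

rs.status().members.forEach(function (m) {
  print(m.name + ' -> ' + m.stateStr)
})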

Now it’s time to create a regular user:

use admin
db.createUser({ user: 'appUser', pwd: 'appPass', roles: [{db: 'app', role: 'readWrite'}] })
quit()

Log in to rc1 mongo shell again with appUser

mongo -u appUser -p appPass --authenticationDatabase admin

Once in mongo shell of rc1

use app
db.list.insert([
  { title: 'one' },
  { title: 'two' }
])

Verify replication

To verify replication is working, enter rc2, login to mongo shell with the appUser credentials above, and run

use app
db.list.find()

The output would look something like the following:

test-set:SECONDARY> db.list.find()
{ "_id" : ObjectId("5c0f2181080076162e179f22"), "title" : "one" }
{ "_id" : ObjectId("5c0f2181080076162e179f23"), "title" : "two" }

Test with a real node.js project

If you are not satisfied, you can go on and create a simple node.js project to test the replica set:

docker pull node
mkdir -p ~/projects/test-mongodb
cd $_
npm i mongodb
vim cluster-auth-test.js

and enter (or just copy/paste) the following code

const MongoClient = require('mongodb').MongoClient

// note: the database is app (where the list collection was created earlier), not test
const url = 'mongodb://appUser:appPass@rc1:27017,rc2:27017,rc3:27017/app?replicaSet=test-set&authSource=admin'
const db = 'app'

const main = async () => {
  console.log('start')
  const client = await MongoClient.connect(url, { useNewUrlParser: true })
  const col = client.db(db).collection('list')
  const res = await col.find({}, { limit: 5 }).toArray()
  console.log(res)
  await client.close()
  console.log('end')
}
main()

Save it, and run it with

runnode cluster-auth-test.js "--net=mongo-cluster"

Result:

start
[ { _id: 5c0f2181080076162e179f22, title: 'one' },
{ _id: 5c0f2181080076162e179f23, title: 'two' } ]
end

Note: Since I am running node version 8 on my system, I put node:8 in the runnode function; you might need to adjust that to a different version to match your setup.

mongodb authentication by example

Procedures

Follow the instructions in reference #1 to create an administrator user:

use admin
db.createUser(
  {
    user: "superuser",
    pwd: "supercool",
    roles: [ { role: "userAdminAnyDatabase", db: "admin" }, "readWriteAnyDatabase" ]
  }
)

Create non-administrator users

Once the administrator is created, restart mongod with the --auth option enabled, and connect to it using

mongo -u superuser -p supercool --authenticationDatabase admin

Let’s say we are going to have a new database named app and we need to create a user to access it. We can issue either use admin or use app before the db.createUser command. Here comes the first note about mongodb authentication: issuing use app does not mean the user details will be created in database app; all user information is stored in the system.users collection of the admin db. The database selected with use admin or use app merely becomes the user’s authentication database, nothing else. For that reason, and because it makes user management a bit easier, I would suggest that admin be used for all users. Therefore, run the following commands:

use app
db.list.insertOne({
  title: 'learn mongodb authentication'
})

use admin
db.createUser(
  {
    user: "appUser",
    pwd: "appPass",
    roles: [ { role: "readWrite", db: "app" } ]
  }
)
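
To see the note above for yourself - that users created this way live in the admin database - you can, while connected as superuser, peek at admin’s system.users collection (the projection just limits the output to the user name and its authentication database):

use admin
db.system.users.find({}, { user: 1, db: 1 })
// both superuser and appUser show up, each with "db" : "admin"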

To test if the user was created successfully, exit the mongo shell and start a new one

mongo -u appUser -p appPass --authenticationDatabase admin app
show collections

The last command should show the collection list created by superuser in the previous mongo shell session. To ensure that user appUser does have read/write privileges in db app,

db.list.find()
db.list.insertOne({
  something: 'else'
})
db.list.find()

Note: Since user appUser is configured to allow access only to db app, the show databases command returns only app, and that’s also the reason app needs to be specified in the mongo command.

A complete node.js example

const MongoClient = require('mongodb').MongoClient

const url = 'mongodb://appUser:appPass@localhost:27017/app?authSource=admin'
const db = 'app'

const main = async () => {
  console.log('start')
  const client = await MongoClient.connect(url, { useNewUrlParser: true })
  const col = client.db(db).collection('list')
  const res = await col.find({}, { limit: 5 }).toArray()
  console.log(res)
  await client.close()
  console.log('end')
}
main()

Note: the authSource option in the connection string is the driver’s equivalent of --authenticationDatabase.

References:

  1. MongoDB Manual on authentication.

  2. SO entry on which authentication database to use

Ubuntu 18.10 installation notes

Ubuntu 18.10 was released not long ago, and I decided to give it a try on my Acer Helios 300 (2018 version). I couldn’t be happier with the result, so I made the switch from Manjaro.

Hardware preparation

I opted to keep the default Windows 10 installation on the built-in nvme drive and installed a spare 120G SSD in the SSD slot. By default the BIOS has SATA mode set to Optane mode, which prevents ubuntu from seeing the drive; change it to AHCI mode instead. The Secure Boot feature also needs to be disabled in the BIOS. The trackpad is set to Advanced mode by default in the BIOS; change it to Basic, because otherwise it won’t work in Linux.

A note on backing up the disk to an image

If you want to do a whole-system backup before making any changes, you can use the Deepin Clone tool from the Deepin Live System (https://www.deepin.org/en/download/). I used to use CloneZilla to make system backups, but it no longer works on this laptop even with the UEFI version of the tool - it just won’t show up in the boot menu.

Software preparation

Downloading the iso from ubuntu.com might be painfully slow. To speed up the download, use a bittorrent tool and download from here instead. I downloaded http://releases.ubuntu.com/cosmic/ubuntu-18.10-desktop-amd64.iso.torrent

“missing” touchpad right click button

Upon reboot after the installation, I found that the right button on the trackpad behaves exactly the same as the left button (or a single tap, since I enabled that). A bit of googling indicates that the two-finger tap is the replacement. Therefore, make sure tapping is enabled for the touchpad and remember to use a two-finger tap to mimic the old right click. It didn’t take long to get used to the new behavior.

install docker with the correct user/group setting

If you plan to install docker and operate it as a non-root user, make sure you do the following BEFORE you install it (under Ubuntu Software)

sudo usermod -aG docker $USER
newgrp docker

Without the above steps, root privileges are required to run docker commands.

Don’t install visual studio code from Ubuntu Software

Instead, download and install the package from https://go.microsoft.com/fwlink/?LinkID=760868.
If you do install it from Ubuntu Software (aka snap) you’ll end up with a very slow VSC start-up. See the issue reported at https://github.com/Microsoft/vscode/issues/61565.

Get hardware sensor information

To get information such as the temperature of the laptop’s cpu/ssd, install Psensor from Ubuntu Software.

Unofficial benchmark results using redis

With docker/redis I made a quick benchmarking comparison between Manjaro Gnome (17) and Ubuntu (18.10), and I was blown away by the result from Ubuntu. With Manjaro I got about 60K ops/s, while I am getting a whopping 150K+ ops/s from Ubuntu. I am not sure how the results can differ so significantly between the two. Below are the steps I took on both systems (I ran them before and after Manjaro was replaced on my laptop).

docker pull redis
docker run --name my-redis --rm -d redis
docker exec -it my-redis redis-benchmark -q

Hardware configurations of this laptop:

  • CPU i7-8750H with 16G of RAM
  • 256G nvme drive (with Windows 10 installed)
  • 120G SSD (with Ubuntu 18.10 installed)

ditch moment.js for date-fns

I was a moment.js fan until I heard about date-fns. One of the biggest problems with moment.js is that date operations such as .subtract or .add mutate the object! Take the following code for example:

var now = new moment()
var prev = now.subtract(5, 'days')
// both prev and now are 5 days ago
// if you need to preserve 'now', you need to do something like
var prev = now.clone()
prev.subtract(5, 'days')
// prev will be 5 days ago but now remains the initial value

I was greatly surprised to discover this weird behavior through googling and through bugs in my projects. To make things worse, moment’s doc doesn’t tell you clearly that calling .subtract() or .add() mutates the original object.

Another issue working with moment.js is that a Moment object is not a native javascript Date object, so conversions are required, which in turn creates two new problems:

  1. Bad performance
  2. Not straightforward to work with, because you need to call .toDate() to convert a Moment into a Date, or moment() to convert a Date (or date string) into a Moment - see the sketch below
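
Here is that round trip next to the date-fns equivalent (a quick sketch, using date-fns v1’s snake_case module paths, matching the snippet further down):

// moment: wrap the Date, operate, unwrap again
var moment = require('moment')
var tomorrow = moment(new Date()).add(1, 'days').toDate()

// date-fns: plain Date in, plain Date out - no wrapper type
var addDays = require('date-fns/add_days')
var tomorrowToo = addDays(new Date(), 1)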

date-fns is billed as the underscore for Dates. If you are a big fan of underscore (or lodash), you know what that means.

Tldr; here’s the date-fns version of the previous code snippet:

var subDays = require('date-fns/sub_days');	// note you can import only the function you need
var now = new Date();
var prev = subDays(now, 5);
// now remains the initial value

I have converted some of my existing projects to date-fns and the experience has been awesome, since I no longer have to live with the annoyances that moment.js brings.

Don’t just take my word for it - go give https://date-fns.org/ a try, or simply

npm i date-fns

if you are already convinced.

Update: It turns out I am not the only one who feels the need for the switch. Dan Abramov, the creator of redux, mentioned this in late 2016
https://twitter.com/dan_abramov/status/805030922785525760?lang=en

OHLC data grouping with mongodb

In this post I will demonstrate how to do data grouping with OHLC data using mongodb’s powerful aggregation framework.

The problem:

Group OHLC data every N weeks, where N>1. I should point out that weekly grouping (the N==1 case) is a whole lot easier.
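
For reference, here is roughly what that easier N==1 case looks like, grouping by calendar year plus $week’s week number (a sketch that has not been run against the sample data below, and subject to $week’s own partial-week behavior at year boundaries):

db.ohlc.aggregate([
  { $match: { S: 'QQQ' } },
  { $sort: { D: 1 } },
  { $group: {
      _id: { year: { $year: '$D' }, week: { $week: '$D' } },
      D: { $last: '$D' },
      O: { $first: '$O' },
      H: { $max: '$H' },
      L: { $min: '$L' },
      C: { $last: '$C' },
      V: { $sum: '$V' }
  } },
  { $sort: { D: 1 } }
])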

Sample Data:

mongodb_aggregation_data.js
/* Data based on http://finance.yahoo.com/q/hp?s=QQQ+Historical+Prices */
> db.ohlc.find()
{ "_id" : ObjectId("54d65597daf0910dfa816995"), "S" : "QQQ", "D" : ISODate("2015-02-06T00:00:00Z"), "O" : 103.92, "H" : 104.17, "L" : 102.76, "C" : 103.13, "V" : 32833800, "A" : 103.13 }
{ "_id" : ObjectId("54d65597daf0910dfa816996"), "S" : "QQQ", "D" : ISODate("2015-02-05T00:00:00Z"), "O" : 103.13, "H" : 103.83, "L" : 102.87, "C" : 103.76, "V" : 23605500, "A" : 103.76 }
{ "_id" : ObjectId("54d65597daf0910dfa816997"), "S" : "QQQ", "D" : ISODate("2015-02-04T00:00:00Z"), "O" : 102.54, "H" : 103.55, "L" : 102.31, "C" : 102.87, "V" : 34073200, "A" : 102.87 }
{ "_id" : ObjectId("54d65597daf0910dfa816998"), "S" : "QQQ", "D" : ISODate("2015-02-03T00:00:00Z"), "O" : 102.33, "H" : 103.03, "L" : 101.68, "C" : 102.96, "V" : 30750400, "A" : 102.96 }
{ "_id" : ObjectId("54d65597daf0910dfa816999"), "S" : "QQQ", "D" : ISODate("2015-02-02T00:00:00Z"), "O" : 101.33, "H" : 102.07, "L" : 99.75, "C" : 101.98, "V" : 43624700, "A" : 101.98 }
{ "_id" : ObjectId("54d65597daf0910dfa81699a"), "S" : "QQQ", "D" : ISODate("2015-01-30T00:00:00Z"), "O" : 101.8, "H" : 102.58, "L" : 100.96, "C" : 101.1, "V" : 42927600, "A" : 101.1 }
{ "_id" : ObjectId("54d65597daf0910dfa81699b"), "S" : "QQQ", "D" : ISODate("2015-01-29T00:00:00Z"), "O" : 100.83, "H" : 102.08, "L" : 99.96, "C" : 101.89, "V" : 46539700, "A" : 101.89 }
{ "_id" : ObjectId("54d65597daf0910dfa81699c"), "S" : "QQQ", "D" : ISODate("2015-01-28T00:00:00Z"), "O" : 103.09, "H" : 103.18, "L" : 100.9, "C" : 100.92, "V" : 43591700, "A" : 100.92 }
/* ... */

The solution:

mongodb_aggregation.js
db.ohlc.aggregate({
    $match: {
        S: 'QQQ'
    }
}, {
    $project: {
        D: '$D', O: '$O', H: '$H', L: '$L', C: '$C', V: '$V', A: '$A',
        weeknbr: {
            $divide: [
                { $subtract: ['$D', new ISODate('1970-01-04')] },
                86400 * 7000 // one week in milliseconds
            ]
        }
    }
}, {
    $project: {
        D: '$D', O: '$O', H: '$H', L: '$L', C: '$C', V: '$V', A: '$A',
        rnd_weeknbr: {
            $subtract: ['$weeknbr', { $mod: ['$weeknbr', 1] }]
        }
    }
}, {
    $project: {
        D: '$D', O: '$O', H: '$H', L: '$L', C: '$C', V: '$V', A: '$A',
        grp_weeknbr: {
            $subtract: ['$rnd_weeknbr', { $mod: ['$rnd_weeknbr', 4] }]
        }
    }
}, {
    $sort: {
        D: 1
    }
}, {
    $group: {
        _id: {
            grp_weeknbr: '$grp_weeknbr'
        },
        D: { $last: '$D' },
        O: { $first: '$O' },
        H: { $max: '$H' },
        L: { $min: '$L' },
        C: { $last: '$C' },
        A: { $last: '$A' },
        V: { $sum: '$V' }
    }
}, {
    $sort: {
        D: 1
    }
})

The explanation:

The idea is to

  1. get the number of weeks (floating point) since a reference date (the weeknbr field in the first $project stage; all OHLC data in the db are later than that date). The reason 1970-01-04 is chosen instead of 1970-01-01 (which is a Thursday) is that it lands on a Sunday.
  2. floor the week number from step 1 (the rnd_weeknbr field in the second $project stage). Since mongodb’s aggregation framework has no round or floor operator, subtracting $mod: ['$weeknbr', 1] is a way to get the integer number of weeks since the reference date - see the worked example after this list.
  3. sort by D. This step is crucial, as the next step relies on $first and $last to pick the open and close prices for each grouped period.
  4. group every 4 weeks’ worth of OHLC rows (the grp_weeknbr field computed in the third $project stage and used as the $group key). The number 4 in that $mod expression can be substituted based on the desired grouping unit.
  5. sort the grouped results by D so the periods come out in chronological order (grp_weeknbr, the grouping key from the previous step, increases along with D).
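
To make the week arithmetic concrete, here is the same calculation for the newest date in the sample data, done in plain javascript:

// week number of 2015-02-06, relative to Sunday 1970-01-04
var ms = new Date('2015-02-06') - new Date('1970-01-04')
var weeknbr = ms / (86400 * 7000) // 86400 * 7000 ms == one week in milliseconds
// weeknbr -> 2352.714...
var rnd_weeknbr = weeknbr - (weeknbr % 1) // floored -> 2352
var grp_weeknbr = rnd_weeknbr - (rnd_weeknbr % 4) // -> 2352, matching the last group in the result below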

The number of weeks needs to be calculated from a fixed reference date because of the partial week problem. See this SO question that I asked before I came up with the solution documented in this post.

Result:

mongodb_aggregation_result.js
/* 0 */
{
    "result" : [
        {
            "_id" : {
                "grp_weeknbr" : 1520
            },
            "D" : ISODate("1999-03-19T00:00:00.000Z"),
            "O" : 102.25,
            "H" : 106.5,
            "L" : 102.31,
            "C" : 102.44,
            "A" : 46.98,
            "V" : 50912800
        },
        {
            "_id" : {
                "grp_weeknbr" : 1524
            },
            "D" : ISODate("1999-04-16T00:00:00.000Z"),
            "O" : 102.88,
            "H" : 112.5,
            "L" : 101,
            "C" : 103.94,
            "A" : 47.67,
            "V" : 182968000
        },
        /* ... */
        {
            "_id" : {
                "grp_weeknbr" : 2344
            },
            "D" : ISODate("2015-01-02T00:00:00.000Z"),
            "O" : 105.11,
            "H" : 105.57,
            "L" : 102.09,
            "C" : 102.94,
            "A" : 102.94,
            "V" : 695244900
        },
        {
            "_id" : {
                "grp_weeknbr" : 2348
            },
            "D" : ISODate("2015-01-30T00:00:00.000Z"),
            "O" : 102.5,
            "H" : 104.58,
            "L" : 100.95,
            "C" : 101.1,
            "A" : 101.1,
            "V" : 794977000
        },
        {
            "_id" : {
                "grp_weeknbr" : 2352
            },
            "D" : ISODate("2015-02-06T00:00:00.000Z"),
            "O" : 101.33,
            "H" : 104.17,
            "L" : 102.07,
            "C" : 103.13,
            "A" : 103.13,
            "V" : 164887600
        }
    ],
    "ok" : 1
}

Note: The last group (2015-02-06) contains only one week’s worth of data (the first week of the group).

I just built my first standing desk

So it’s for real

I’ve always wanted to build a standing desk using Ikea parts. Other people have done it with Finnvard height-adjustable table legs (http://www.ikea.com/us/en/catalog/products/00144763/) and a table top. My plan was put off because the Long Island Ikea store had discontinued the Finnvard legs for some time. I was lucky to find them in-store again this weekend and voilà - my dream desk is finally built:

I bought this $40 stool just in case I need to take a short break from standing:
http://www.ikea.com/us/en/catalog/products/50199215/

I also like the fact that I am able to fit my laser printer onto the shelf:

Here’s the recipe

1 x Linnmon table top, http://www.ikea.com/us/en/catalog/products/50251350/, $40

2 x Finnvard table legs http://www.ikea.com/us/en/catalog/products/00144763/, $30 ea

1 x Tertial Work lamp, http://www.ikea.com/us/en/catalog/products/20370383/, $9

Total: $109+Tax

Note

I didn’t do a serious hack to increase the height of the table legs like this guy did (http://www.ikeahackers.net/2014/02/convert-the-finnvard-into-a-height-adjustable-standing-desk.html) because the legs can reach up to 36 5/8"; with 1" for the table top, the final height is 37 5/8", which is pretty close to my ideal desk height (the height where my elbows can rest on the table top).

Install Gitlab(6.4) on Raspberry PI

I am a big fan of both the Raspberry PI and Gitlab, so it kinda bugged me that my attempts to install Gitlab on the RPI kept failing because the therubyracer gem would not install. Others have run into similar problems: http://www.raspberrypi.org/phpBB3/viewtopic.php?t=32716&p=397934. By following most of user dpenezic’s instructions I finally managed to install Gitlab (currently at version 6.4) on my RPI (512MB RAM, of which only 384MB is available to the system since I allocate the rest to the GPU). So here is what I did:

Steps

1) Follow https://github.com/gitlabhq/gitlabhq/blob/6-4-stable/doc/install/installation.md until “Install Gems”

2) Install libv8 (https://github.com/cowboyd/libv8)

# [update: added git-svn to the list on 1/17/2014]
sudo apt-get install -y subversion git-svn
[ -d ~/tmp ] || mkdir ~/tmp
cd ~/tmp
git clone https://github.com/cowboyd/libv8
cd libv8
bundle install
# be patient, the following command takes a while
bundle exec rake clean build binary
sudo gem install pkg/libv8-3.11.8.17-armv6l-linux.gem

3) Modify /home/git/gitlab/Gemfile (and .lock) to skip installation of libv8 (as it’s installed through the above step) and therubyracer

cd /home/git/gitlab
sudo -u git -H editor Gemfile    # and remove the line: gem "therubyracer"
sudo -u git -H editor Gemfile.lock    # and removed the following lines
    libv8 (3.16.14.3)
    therubyracer (0.12.0)
      libv8 (~> 3.16.14.0)
      ref

4) Install node.js, you can take a look at the script I came up with to compile node.js in RPI: https://github.com/midnightcodr/rpi_node_install

5) Now you can resume the “Install Gems” step in the Gitlab installation guide

sudo -u git -H bundle install --deployment --without development test postgres aws
# and the rest

Notes

1) 384MB is not enough to run the “Compile assets” step of the gitlab installation guide, so I had to add more swap by following http://www.cyberciti.biz/faq/linux-add-a-swap-file-howto/

sudo dd if=/dev/zero of=/swapfile1 bs=1024 count=524288
sudo mkswap /swapfile1
sudo chmod 0600 /swapfile1
sudo swapon /swapfile1

2) Make sure the server_name setting in /etc/nginx/sites-available/gitlab matches gitlab_url in /home/git/gitlab-shell/config.yml, also add an entry to your RPI’s /etc/hosts

127.0.0.1    gitlab.server.hostname

3) With the current version of Gitlab, performance is not that bad at all - it takes about 2 seconds to switch pages.