Created by Jaka Hudoklin / @offlinehacker
Fullstack software engineer in JavaScript, Python, C, Nix and more, with experience in web technologies, system provisioning, embedded devices and security.
http://gatehub.net: a new fintech platform for multi-currency payments, trading and exchange, based on Ripple
Data-driven distributed task automation and data aggregation framework using graph databases, Docker containers and Nix.
How to deploy:
Many more available here: http://stackoverflow.com/questions/18285212/how-to-scale-docker-containers-in-production
$ cat Dockerfile
# base image with the Nix package manager
FROM nixos/nix:1.10
# pin a nixpkgs channel and install nginx from it
RUN nix-channel --add http://nixos.org/releases/nixpkgs/nixpkgs-16.03pre71923.3087ef3/ dev
RUN nix-channel --update
RUN nix-env -iA dev.nginx
# add our configuration and run nginx in the foreground
ADD nginx.conf nginx.conf
CMD nginx -c $PWD/nginx.conf -g 'daemon off;'
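The nginx.conf that gets ADDed is not shown here; a minimal, hypothetical configuration serving static files might look like this:
$ cat nginx.conf
# hypothetical minimal config serving static files from /var/www
pid /tmp/nginx.pid;
error_log stderr;
events { }
http {
  access_log off;
  server {
    listen 80;
    root /var/www;
  }
}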
$ docker build -t offlinehacker/nginx .
$ docker run -ti -p 80:80 offlinehacker/nginx
You can run NixOS inside Docker containers using --privileged mode
... but you don't want to do that
A working implementation of a service abstraction layer is in progress, but it is currently not on my priority list
Kubernetes: a distributed cluster manager for Docker containers
$ cat nginx-controller.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-controller
spec:
  replicas: 2
  # selector identifies the set of Pods that this
  # replication controller is responsible for managing
  selector:
    app: nginx
  # podTemplate defines the 'cookie cutter' used for creating
  # new pods when necessary
  template:
    metadata:
      labels:
        # Important: these labels need to match the selector above
        # The api server enforces this constraint.
        app: nginx
    spec:
      containers:
        - name: nginx
          image: offlinehacker/nginx
          ports:
            - containerPort: 80
$ kubectl create -f nginx-controller.yaml
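As a quick usage example, the replica count can later be changed with kubectl scale (the count here is arbitrary):
$ kubectl scale --replicas=5 rc nginx-controller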
$ cat nginx-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  ports:
    - port: 8000 # the port that this service should serve on
      # the container on each pod to connect to, can be a name
      # (e.g. 'www') or a number (e.g. 80)
      targetPort: 80
      protocol: TCP
  # just like the selector in the replication controller,
  # but this time it identifies the set of pods to load balance
  # traffic to.
  selector:
    app: nginx
$ kubectl create -f nginx-service.yaml
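To verify the service, list it and curl its cluster IP on the service port (the IP below is a placeholder):
$ kubectl get services
$ curl http://<cluster-ip>:8000/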
$ kubectl get nodes
$ kubectl get pods
$ kubectl logs nginx-controller-c5sik
$ kubectl exec -t -i -p nginx-controller-c5sik -c nginx -- sh
There is a NixOS module for Kubernetes, created and maintained by me
It has been deployed in production for quite some time, so far without any major issues
networking = {
  bridges = {
    cbr0.interfaces = [ ];
  };
  interfaces = {
    cbr0 = {
      ipAddress = "10.10.0.1";
      prefixLength = 24;
    };
  };
};
services.kubernetes.roles = [ "master" "node" ];
virtualisation.docker.extraOptions =
  ''--iptables=false --ip-masq=false -b cbr0'';
This enables Kubernetes with all the services you need for a single-node Kubernetes cluster
For production environments you need a cluster of at least three machines, one of which is the master
All machines need to be connected, and each node's pod subnet has to be routable from the others
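A rough sketch of such a setup with the NixOS module (hostnames, addresses and subnets here are just assumptions, not from the original setup):
# master machine (sketch)
{
  services.kubernetes.roles = [ "master" ];
}

# node machines (sketch) -- each node bridges its containers onto its own routable subnet
{
  networking.bridges.cbr0.interfaces = [ ];
  networking.interfaces.cbr0 = {
    ipAddress = "10.10.1.1"; # e.g. 10.10.2.1 on the second node, and so on
    prefixLength = 24;
  };
  services.kubernetes.roles = [ "node" ];
  # nodes also have to be pointed at the master's API server (options omitted here)
  virtualisation.docker.extraOptions =
    ''--iptables=false --ip-masq=false -b cbr0'';
}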
I have developed a set of reusable NixOS profiles that you can simply include in your configuration. They are available at https://github.com/offlinehacker/x-truder.net
... they still need better documentation

My social media and sites: