<p><em>Daniel Parker’s blog - Danny’s personal tech blog. Daniel Parker, dcparker88@gmail.com. Feed generated by Jekyll, 2019-02-27.</em></p>
<hr />
<p><strong>Switching to HAProxy from Nginx</strong> (2019-02-21): <a href="https://danielparker.me/haproxy/nginx/comparison/nginx-vs-haproxy">https://danielparker.me/haproxy/nginx/comparison/nginx-vs-haproxy</a></p>
<h1 id="overview">Overview</h1>
<p>I’m currently in the process of switching my team’s load balancers from Nginx to HAProxy. I mentioned it briefly in <a href="https://danielparker.me/haproxy/consul/srv/haproxy-srv-consul/">this blog post,</a> but I wanted to expand on some of my reasoning a bit more. Again, this isn’t meant as a post bashing Nginx. I have had great success with Nginx and we still use it in certain areas. This is more of a post around the features HAProxy has that were compelling enough for me to switch.</p>
<p>For context, I want to talk a little bit about our architecture. We use Fastly as our CDN, and Fastly is the entry point to our stack. From Fastly, we hit our edge load balancers (currently Nginx, soon to be HAProxy). From the edge load balancers, we route to any number of backend microservices (depending on the environment, there could be hundreds). Essentially, our edge load balancers decide which backend to send traffic to based on the route (like <code class="highlighter-rouge">/checkout/v2</code>) and handle the load balancing. They also handle SSL and act as a line of defense against malicious calls.</p>
<h1 id="haproxy-advantages">HAProxy Advantages</h1>
<p>There are 3 main features that HAProxy has that Nginx doesn’t (in the community version, at least) that got me thinking about making a switch. I will go through each of these one at a time.</p>
<h2 id="upstream-health-check-support">Upstream health check support</h2>
<p>The first feature that is important to me is upstream health checks. Nginx has a health check feature in Nginx Plus, but that isn’t an option for me. Nginx can also retry when an upstream call fails, but without Nginx Plus there is no proactive way for Nginx to check backend health. HAProxy <em>does</em> have health check support; in fact, it supports both TCP and HTTP health checks.</p>
<p>The docs offer multiple ways to configure health checks, but we usually set ours directly in our HAProxy backend.</p>
<h3 id="http-check">HTTP check</h3>
<p>In our backend, you can define an HTTP check like this:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>backend checkout-v2
    mode http
    balance roundrobin
    option httpchk GET /checkout/v2/health HTTP/1.1\r\nHost:\ haproxy
    server-template checkout-v2 10 checkout-v2.service.consul:8080 check resolvers myresolver resolve-prefer ipv4
</code></pre></div></div>
<p>The specific line we care about is <code class="highlighter-rouge">option httpchk GET /checkout/v2/health HTTP/1.1\r\nHost:\ haproxy</code>. This tells HAProxy to proactively send a request to <code class="highlighter-rouge">/checkout/v2/health</code> on each backend server (with the request host set to “haproxy”). HAProxy treats a 2xx or 3xx response as healthy and marks the server down once checks fail, so an unhealthy backend is removed from rotation before real traffic ever errors against it. It also allows us to put additional logic behind our health endpoint - like waiting until the internal cache is warm or ensuring we can connect to our database before being marked as healthy.</p>
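<p>The check interval and failure thresholds are tunable per backend. Here is a sketch (the <code class="highlighter-rouge">inter 5s fall 3 rise 2</code> values are illustrative, not from our production config): check every 5 seconds, mark a server down after 3 consecutive failures, and bring it back after 2 consecutive successes.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>backend checkout-v2
    mode http
    balance roundrobin
    option httpchk GET /checkout/v2/health HTTP/1.1\r\nHost:\ haproxy
    default-server inter 5s fall 3 rise 2
    server-template checkout-v2 10 checkout-v2.service.consul:8080 check resolvers myresolver resolve-prefer ipv4
</code></pre></div></div>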
<h3 id="tcp-check">TCP check</h3>
<p>HAProxy also allows standard TCP checks. These checks are less flexible than HTTP checks - they can only tell you whether the IP:port combination is open and listening - but they’re a good start if you don’t have, or aren’t ready for, HTTP health checks.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>backend checkout-v2
    mode http
    balance roundrobin
    server-template checkout-v2 10 checkout-v2.service.consul:8080 check port 8080 resolvers myresolver resolve-prefer ipv4
</code></pre></div></div>
<p>The important part of this stanza is <code class="highlighter-rouge">check port 8080</code>. This tells HAProxy to check port 8080 on each backend IP periodically and ensure it’s listening before routing traffic.</p>
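<p>Conceptually, a TCP check is nothing more than attempting a connection. Here is a minimal Python sketch of the idea - an illustration of the semantics, not how HAProxy actually implements its checks:</p>

```python
import socket

def tcp_check(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if host:port accepts a TCP connection within the timeout.

    This mirrors what a load balancer's TCP health check asserts: the
    socket is open and listening - nothing about application health.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A port can be listening while the application behind it is broken, which is exactly why we prefer HTTP checks where possible.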
<hr />
<p>There are many more options in the HAProxy documentation. Check out the <a href="https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#4.2-tcp-check%20connect">TCP check options</a> or the <a href="https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#option%20httpchk">HTTP check options.</a></p>
<h2 id="full-metrics-support">Full Metrics Support</h2>
<p>Metrics have become essential for diagnosing problems and understanding what is happening in your infrastructure. Nginx, unfortunately, does not have the same level of metrics support as HAProxy. Nginx (outside of Nginx Plus) only offers <a href="http://nginx.org/en/docs/http/ngx_http_stub_status_module.html">basic stats</a> out of the box. In the past, we actually wrote scripts to parse the Nginx logs and convert them to metrics (like the number of 200 status codes, latency, etc.) - but this became unmanageable as our traffic levels grew. HAProxy has full metrics support in the free version. Here is a quick overview of the <a href="https://cbonte.github.io/haproxy-dconv/1.9/management.html#9.3-show%20stat">available metrics.</a></p>
<p>We were able to point Telegraf at the HAProxy stats socket and easily get useful information about our frontends and backends. Datadog has a great post on getting everything set up with HAProxy and starting to collect metrics; I recommend reading <a href="https://www.datadoghq.com/blog/how-to-collect-haproxy-metrics/">this.</a></p>
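<p>For reference, exposing both the stats socket and the built-in stats UI takes only a few lines of config. This is a generic sketch - the socket path, port, and refresh interval are arbitrary choices, not our production values:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>global
    stats socket /var/run/haproxy.sock mode 660 level admin

listen stats
    bind *:8404
    mode http
    stats enable
    stats uri /stats
    stats refresh 10s
</code></pre></div></div>
<p>Telegraf can read the CSV metrics from the socket, and you can inspect them by hand with something like <code class="highlighter-rouge">echo "show stat" | socat stdio /var/run/haproxy.sock</code>.</p>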
<h2 id="dns-srv-record-support">DNS SRV Record Support</h2>
<p>At Target, we heavily use <a href="https://www.consul.io/">Consul</a> as our service registry. One of the benefits of Consul is that every service registered to it can be addressed using DNS. This includes <a href="https://en.wikipedia.org/wiki/SRV_record">SRV records</a>, which also have the port. We’ve started to use <a href="https://www.hashicorp.com/resources/nomad-scaling-target-microservices-across-cloud">Nomad</a> as a scheduler - so the ports of our microservices are dynamic. In order to keep our routing as simple and as up-to-date as possible, I wanted to use these SRV records in our load balancers.</p>
<p>Nginx does not support SRV records in the community version; it only supports them in Nginx Plus. HAProxy, however, does support SRV records. I wrote a post a few months ago on <a href="https://danielparker.me/haproxy/consul/srv/haproxy-srv-consul/">configuring HAProxy to work with SRV records.</a> This was hugely beneficial to us, as we could very easily integrate Nomad and its dynamic ports with our frontend load balancers.</p>
<h1 id="conclusion">Conclusion</h1>
<p>Nginx has treated me very well in the past, but I’m currently switching most of my workloads over to HAProxy. There are still places where I use Nginx, and probably always will. The features listed above were too good to pass up, so I’ve started using HAProxy in the majority of my deployments. I look forward to continuing to learn all the capabilities of HAProxy.</p>
<hr />
<p><strong>Simple Blue/Green Deployments with Nomad and HAProxy</strong> (2019-02-21): <a href="https://danielparker.me/haproxy/blue-green/deployments/canary/nomad/simple-blue-green-haproxy">https://danielparker.me/haproxy/blue-green/deployments/canary/nomad/simple-blue-green-haproxy</a></p>
<h1 id="overview">Overview</h1>
<p>I’ve recently started deploying HAProxy to replace Nginx for most of our application load balancing. You can read more about my decision to switch from Nginx to HAProxy <a href="https://danielparker.me/haproxy/nginx/comparison/nginx-vs-haproxy">in this blog post.</a> One reason I am switching is DNS SRV record support, driven by our use of <a href="https://www.hashicorp.com/resources/nomad-scaling-target-microservices-across-cloud">Nomad at Target.</a> Another feature Nomad gives us is blue/green and canary deployments. I needed to figure out how to integrate these features with our edge load balancer - HAProxy.</p>
<h2 id="nomad">Nomad</h2>
<p>Nomad gives us the ability to do blue/green and canary <a href="https://www.nomadproject.io/guides/operating-a-job/update-strategies/blue-green-and-canary-deployments.html">deployments.</a> Nomad differentiates “live” traffic from “canary” (or blue/green) by using Consul tags. For example, we may have 4 microservices deployed that are active. These would have a <code class="highlighter-rouge">live</code> <a href="https://www.consul.io/docs/agent/services.html">tag</a> in Consul. If we deployed a canary, a 5th microservice would be deployed with a <code class="highlighter-rouge">canary</code> tag. You can see this configuration in our Nomad job file:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>job "canary-deployments" {
  type = "service"

  update {
    stagger          = "30s"
    max_parallel     = 1
    min_healthy_time = "120s"
    canary           = 1
  }

  ...

  service {
    port = "http"
    name = "canary-deployments"

    tags = [
      "live"
    ]

    canary_tags = [
      "canary"
    ]
  }
}
</code></pre></div></div>
<p>The first piece, <code class="highlighter-rouge">canary = 1</code>, tells Nomad to enable canary deployments. The second, <code class="highlighter-rouge">tags = [ "live" ]</code>, tags everything currently running with the <code class="highlighter-rouge">live</code> tag. The last, <code class="highlighter-rouge">canary_tags = [ "canary" ]</code>, tags any ongoing canary deployment with the <code class="highlighter-rouge">canary</code> tag. These tags are important, as they allow us to route specific requests to the proper backend using HAProxy.</p>
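<p>With these tags in place, Consul exposes tag-filtered DNS names of the form <code class="highlighter-rouge">&lt;tag&gt;.&lt;service&gt;.service.consul</code>. For the job above, that gives us names like the following (illustrative, assuming Consul’s default DNS domain):</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>live.canary-deployments.service.consul     # resolves only to instances tagged "live"
canary.canary-deployments.service.consul   # resolves only to instances tagged "canary"
</code></pre></div></div>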
<h2 id="haproxy">HAProxy</h2>
<p>Now we want to set up HAProxy to properly route us to the backend we expect. In the simplest configuration, we’ll have 2 backends: the live backend, and the canary backend. Let’s take a peek at how that looks:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>frontend http-in
    mode http
    bind *:80

backend api-v1
    mode http
    server-template api-v1 10 live.api-v1.service.consul check resolvers myresolver resolve-prefer ipv4

backend api-v1-canary
    mode http
    reqrep ^([^\ :]*)\ /canary/(.*) \1\ /\2
    server-template api-v1-canary 10 canary.api-v1.service.consul check resolvers myresolver resolve-prefer ipv4
</code></pre></div></div>
<p>This sets up three things: a frontend in HAProxy that listens for traffic on port 80, and two backends we can route traffic to - <code class="highlighter-rouge">api-v1</code> and <code class="highlighter-rouge">api-v1-canary</code>. Since we’re using Consul DNS records to generate the backend list, we can also use Consul tags in the DNS names.</p>
<p>Backend <code class="highlighter-rouge">api-v1</code> will only find services registered with the <code class="highlighter-rouge">live</code> tag, and <code class="highlighter-rouge">api-v1-canary</code> will only find backends with the <code class="highlighter-rouge">canary</code> tag. We also add an option, <code class="highlighter-rouge">reqrep ^([^\ :]*)\ /canary/(.*) \1\ /\2</code> - this will strip off the <code class="highlighter-rouge">/canary/</code> (we’ll cover this later) so we don’t pass it to our backend. That way we don’t have to tell our APIs to look for a <code class="highlighter-rouge">/canary/</code> path - to the API all requests are the same.</p>
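<p>To make the rewrite concrete: <code class="highlighter-rouge">reqrep</code> applies a regex substitution to the raw HTTP request line. Here is a small Python sketch of the equivalent substitution (the sample request line is made up):</p>

```python
import re

# HAProxy's "reqrep ^([^\ :]*)\ /canary/(.*) \1\ /\2" operates on the raw
# request line. Group 1 captures the method, group 2 everything after /canary/.
request_line = "GET /canary/api/v1/orders HTTP/1.1"
rewritten = re.sub(r"^([^ :]*) /canary/(.*)", r"\1 /\2", request_line)
print(rewritten)  # GET /api/v1/orders HTTP/1.1
```

So the canary backend sees exactly the same path the live backend would.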
<h3 id="routing-based-on-request-path">Routing Based On Request Path</h3>
<p>So now, let’s talk about routing to these backends. The first option is to route based on the request path. We can have the following config:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>acl url_api-v1 path_beg /api/v1/
use_backend api-v1 if url_api-v1
acl url_api-v1-canary path_beg /canary/api/v1/
use_backend api-v1-canary if url_api-v1-canary
</code></pre></div></div>
<p>This is fairly simple - it checks the URL for a path match and sends the request to the proper backend. So, if we made the call <code class="highlighter-rouge">http://haproxy/api/v1</code> it would route to the active backend, <code class="highlighter-rouge">api-v1</code>. Alternatively, if we made the call <code class="highlighter-rouge">http://haproxy/canary/api/v1</code> it would route to the canary backend, <code class="highlighter-rouge">api-v1-canary</code>, allowing us to test the new version we’re deploying before it goes live.</p>
<h3 id="routing-based-on-a-header">Routing Based On A Header</h3>
<p>We can also make routing decisions based on a request header. This is nice because we don’t have to change the paths of the API or worry about stripping off extra paths, but can require a larger code change than just changing a URL. Let’s look at a simple example:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>acl hdr_api-v1 hdr(My-Custom-Header) -m str api-v1
use_backend api-v1 if hdr_api-v1
acl hdr_api-v1-canary hdr(My-Custom-Header) -m str api-v1-canary
use_backend api-v1-canary if hdr_api-v1-canary
</code></pre></div></div>
<p>This tells HAProxy to check for the value of the header <code class="highlighter-rouge">My-Custom-Header</code>. If the value is <code class="highlighter-rouge">api-v1</code> it will send you to that backend. If the value is <code class="highlighter-rouge">api-v1-canary</code> it will send you to the canary backend. This is a very simple example - check the HAProxy <a href="https://www.haproxy.com/blog/introduction-to-haproxy-acls/">ACL documentation for more.</a></p>
<h1 id="conclusion">Conclusion</h1>
<p>This is a simple way to set up blue/green and canary deployments using Nomad and HAProxy. I am continuing to learn and find new ways to manage traffic with HAProxy, so I will continue to post about new options!</p>
<hr />
<p><strong>Nomad at Target: Scaling our Microservices Across the Public and Private Cloud</strong> (2018-12-05): <a href="https://danielparker.me/nomad/hashiconf/nomad-hashiconf-talk">https://danielparker.me/nomad/hashiconf/nomad-hashiconf-talk</a></p>
<p>I was recently at HashiConf 2018 and gave a presentation on Nomad. Specifically, how we’re using Nomad on my team at Target. Check it out <a href="https://youtu.be/ywQHBuc0OL4">here.</a></p>
<hr />
<p><strong>Load Balancing Using Consul, SRV Records, and HAProxy</strong> (2018-09-21): <a href="https://danielparker.me/haproxy/consul/srv/haproxy-srv-consul">https://danielparker.me/haproxy/consul/srv/haproxy-srv-consul</a></p>
<h1 id="overview">Overview</h1>
<p>A few months ago, I wrote a <a href="https://danielparker.me/nginx/consul-template/consul/nginx-consul-template/">blog post</a> about using Nginx + Consul to do dynamic routing and service discovery. Recently, I’ve started to explore the possibility of replacing Nginx with <a href="http://www.haproxy.org/">HAProxy.</a> There are a few important reasons I wanted to use HAProxy rather than Nginx, and I want to cover those quickly before getting to the main topic - using SRV records for routing and load balancing.</p>
<h2 id="nginx-vs-haproxy">Nginx vs HAProxy</h2>
<p>Nginx has treated me very well. I’ve used Nginx in some form since 2012, and this isn’t meant to bash it. Recently, however, I’ve been needing (or wanting) some additional features. Nginx has most of these features, but unfortunately only in the paid Nginx Plus version. Let’s take a look at the features:</p>
<ul>
<li>Upstream health check support
<ul>
<li>Nginx doesn’t allow this in the free version</li>
<li>HAProxy can continuously health check its upstreams with a configurable URL to ensure they are still healthy</li>
</ul>
</li>
<li>Full metrics support
<ul>
<li>the <a href="https://www.datadoghq.com/blog/how-to-collect-nginx-metrics/">Nginx status page</a> is the only metrics source available (without writing your own Lua or using someone else’s)</li>
<li>HAProxy supports a <a href="https://www.datadoghq.com/blog/how-to-collect-haproxy-metrics/">much fuller suite</a> of metrics and even a simple UI</li>
</ul>
</li>
<li><a href="https://en.wikipedia.org/wiki/SRV_record">SRV record</a> support
<ul>
<li>only available in Nginx Plus</li>
<li>available in HAProxy</li>
</ul>
</li>
</ul>
<p>These are the 3 main features that are currently driving my switch to HAProxy from Nginx. In this blog post - I’d like to focus on the last feature, SRV record support.</p>
<h1 id="srv-records">SRV Records</h1>
<p>SRV records are DNS records that carry more information about a host - namely, both the IP and the port. There is also no fixed limit on how many records an SRV lookup can return (beyond DNS response-size limits) - so if you have 50 backends, all 50 will be returned.</p>
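<p>For illustration, a single SRV answer has the shape <code class="highlighter-rouge">priority weight port target</code>. A made-up Consul response might look like this - note that the port travels with the answer, which is exactly what makes SRV records useful for dynamic ports:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>_ordering-service._tcp.service.consul. 0 IN SRV 1 1 9099 node1.node.dc1.consul.
</code></pre></div></div>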
<p>I’ve been looking into using SRV records for a few years. One of the main drivers is the fact that <a href="https://www.consul.io/docs/agent/dns.html#rfc-2782-lookup">Consul</a> supports SRV records. At my day job, we use Consul as our primary service discovery mechanism. This means every API, every database, everything is registered to Consul and can be discovered through DNS.</p>
<h2 id="consul-template">Consul-Template</h2>
<p>In our Nginx world, we use <a href="https://github.com/hashicorp/consul-template">Consul-Template</a> to generate our Nginx configuration dynamically. This works well - and we use it in production. However, there are a few downsides:</p>
<ul>
<li>consul-template sometimes fails, and servers get out of sync</li>
<li>consul-template must directly reload nginx for changes to take effect</li>
<li>consul-template makes it harder to be flexible
<ul>
<li>because we’re discovering 50+ services, loops are our friend - but this makes it hard to have unique config requirements between backends</li>
</ul>
</li>
</ul>
<p>So - for the reasons above I started researching SRV records in HAProxy. Let’s take a look at some examples.</p>
<h1 id="haproxy-srv-examples">HAProxy SRV Examples</h1>
<p>Let’s take a look at how to set up HAProxy to work with SRV records. The first requirement is to have a service registered to Consul. Let’s take a quick look at setting up Consul:</p>
<h2 id="consul-configuration">Consul configuration</h2>
<p>I won’t go into installing Consul; it’s <a href="https://www.consul.io/intro/getting-started/install.html">well-documented</a> on the HashiCorp page. However, let’s take a quick look at the <a href="https://www.consul.io/docs/agent/services.html">service</a> JSON we want to discover:</p>
<div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">{</span><span class="w">
</span><span class="s2">"service"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="s2">"name"</span><span class="p">:</span><span class="w"> </span><span class="s2">"ordering-service"</span><span class="p">,</span><span class="w">
</span><span class="s2">"id"</span><span class="p">:</span><span class="w"> </span><span class="s2">"ordering-service"</span><span class="p">,</span><span class="w">
</span><span class="s2">"port"</span><span class="p">:</span><span class="w"> </span><span class="mi">9099</span><span class="p">,</span><span class="w">
</span><span class="s2">"tags"</span><span class="p">:</span><span class="w"> </span><span class="p">[]</span><span class="w">
</span><span class="p">}</span><span class="w">
</span><span class="p">}</span><span class="w">
</span></code></pre></div></div>
<p>This is a very simple example - but it allows us to do the following:</p>
<ul>
<li>register a service to consul with the name <code class="highlighter-rouge">ordering-service</code></li>
<li>register a port for that service, port <code class="highlighter-rouge">9099</code></li>
</ul>
<p>We’re now ready to connect HAProxy to SRV records.</p>
<h2 id="haproxy-configuration">HAProxy configuration</h2>
<p>I won’t go over installing HAProxy; it can be done through an RPM or by following the <a href="http://cbonte.github.io/haproxy-dconv/1.8/intro.html#3.6">docs.</a></p>
<p>HAProxy is configured through an <code class="highlighter-rouge">haproxy.cfg</code> file. For the purpose of this blog, this file will contain our entire config.</p>
<p>I’m not going to post the full <code class="highlighter-rouge">haproxy.cfg</code> file for the sake of brevity, you can find examples <a href="https://www.haproxy.org/download/1.8/doc/configuration.txt#2.5">online</a> and the RPM comes with an example.</p>
<p>The first part of the SRV configuration is the DNS resolver. We have to configure HAProxy to use Consul as its DNS server. The config looks like this:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>resolvers consul
    nameserver consul $consul-client-ip:8600
    resolve_retries 30
    timeout retry 2s
    hold valid 120s
    accepted_payload_size 8192
</code></pre></div></div>
<p>This section sets up a DNS resolver named <code class="highlighter-rouge">consul</code> that we can use later on.</p>
<p>Next, we configure our backend. We’ll use the <code class="highlighter-rouge">ordering-service</code> we set up earlier as our backend.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>backend ordering-service
    mode http
    balance leastconn
    option httpchk GET /ordering-service/health HTTP/1.1\r\nHost:\ haproxy
    server-template ordering-service 50 _ordering-service._tcp.service.consul check resolvers consul
</code></pre></div></div>
<p>Let’s drill in a bit on these configuration options.</p>
<ul>
<li><code class="highlighter-rouge">mode http</code> just tells it to use http rather than tcp to connect</li>
<li><code class="highlighter-rouge">option httpchk...</code> sets up a health check. This tells HAProxy to continuously hit a health endpoint on each server to ensure it’s healthy.</li>
<li><code class="highlighter-rouge">server-template...</code> sets up the SRV record.
<ul>
<li><code class="highlighter-rouge">_ordering-service._tcp.service.consul</code> is the SRV record name. It must start with an underscore <code class="highlighter-rouge">_</code>, and <code class="highlighter-rouge">_tcp</code> tells Consul to pull all records for the service. That segment can also be replaced with a tag from the Consul service to filter the results.</li>
<li><code class="highlighter-rouge">resolvers consul</code> tells it to use the consul resolver we set up previously.</li>
</ul>
</li>
</ul>
<p>This will allow HAProxy to automatically discover any server IP that is registered to Consul with the <code class="highlighter-rouge">ordering-service</code> ID. It also sends traffic to the port in the service, in our case <code class="highlighter-rouge">9099</code>. This is very useful since we can use dynamic ports in our services without having to list them in the HAProxy config file. This works really well using something like <a href="/nomad/hashicorp/schedulers/nomad/">Nomad</a> too, since every service will bind to a dynamic port.</p>
<h1 id="conclusion">Conclusion</h1>
<p>That should be it - HAProxy should now find the service using the SRV record. It will also continuously check the DNS record for any additions or subtractions to the service pool. You can add more servers with the <code class="highlighter-rouge">ordering-service</code> service configured, and HAProxy will automatically start to load balance across them.</p>
<hr />
<p><strong>Running Cassandra in Kubernetes Across 1,800 Stores</strong> (2018-08-08): <a href="https://danielparker.me/cassandra/kubernetes/target-tech-cassandra-blog">https://danielparker.me/cassandra/kubernetes/target-tech-cassandra-blog</a></p>
<p>I wrote a blog post for Target’s public-facing blog today. <a href="https://tech.target.com/2018/08/08/running-cassandra-in-kubernetes-across-1800-stores.html">Check it out</a> and let me know if you have any questions.</p>
<hr />
<p><strong>Escaping code in code block with Jekyll</strong> (2018-03-10): <a href="https://danielparker.me/liquid/jekyll/til/liquid-escape-til">https://danielparker.me/liquid/jekyll/til/liquid-escape-til</a></p>
<h1 id="overview">Overview</h1>
<p>Today, when writing a post, I ran into an interesting error: while running my blog locally, I was getting Liquid errors from a code block in a post.</p>
<figure class="highlight"><pre><code class="language-text" data-lang="text">Liquid Warning: Liquid syntax error (line 113): Expected end_of_string but found id
Liquid Warning: Liquid syntax error (line 114): Expected end_of_string but found id</code></pre></figure>
<p>I was having a hard time figuring out how to tell Jekyll this was just code in a code block. Finally, I found out you can escape the code using <code class="highlighter-rouge">raw</code>:</p>
<figure class="highlight"><pre><code class="language-text" data-lang="text">{% raw %}</code></pre></figure>
<p>For example:</p>
<figure class="highlight"><pre><code class="language-text" data-lang="text">{% raw %}
Some code I want to escape
{\% endraw \%}</code></pre></figure>
<p>I still had to put the <code class="highlighter-rouge">\</code> in the block above to get it to escape the example, so I am still learning as well.</p>
<hr />
<p><strong>Generating dynamic config with Nginx and Consul-Template</strong> (2018-03-05): <a href="https://danielparker.me/nginx/consul-template/consul/nginx-consul-template">https://danielparker.me/nginx/consul-template/consul/nginx-consul-template</a></p>
<h1 id="overview">Overview</h1>
<p>In my day job, we heavily use Nginx for our edge web servers. These servers route traffic to many different microservices, which means the Nginx config can become complex - with many different Nginx <a href="http://nginx.org/en/docs/http/ngx_http_core_module.html#location">locations</a> and <a href="http://nginx.org/en/docs/http/ngx_http_upstream_module.html#upstream">upstreams.</a> Recently, we started registering all of these services in <a href="https://www.consul.io">Consul.</a> This gives us many benefits - one of which is the ability to use <a href="https://github.com/hashicorp/consul-template">consul-template.</a> This tool allows us to dynamically generate configuration based on the services registered in Consul. This means we can automatically add servers to Nginx as new ones come online - and automatically remove them as we scale down or a service fails. I wanted to give some examples of how we accomplished this, and what our configuration looks like.</p>
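<p>To give a flavor of what consul-template does before diving in: a minimal template for a single upstream might look like the sketch below (the service and pool names are illustrative). consul-template renders one <code class="highlighter-rouge">server</code> line per instance registered in Consul:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>upstream order-api-pool {
  least_conn;
  keepalive 32;
  {{ range service "order-api" }}server {{ .Address }}:{{ .Port }};
  {{ end }}
}
</code></pre></div></div>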
<h2 id="background">Background</h2>
<p>First, let’s start with the Nginx configuration. I won’t get into the full Nginx config - if anyone is interested, I can put that in another post. For now, let’s just focus on the location and upstream blocks.</p>
<p>Each of our microservices has a unique route - usually something like <code class="highlighter-rouge">/$api-name/$version</code>. This means the Nginx location block needs to know the unique route for each microservice it’s responsible for routing. Each location block has a corresponding upstream block, which needs to know every server currently serving traffic for the API matching the location. So for high-traffic APIs, we may have 50 or more servers in the upstream block, each serving traffic. Let’s take a look at what each of these blocks looks like:</p>
<p><code class="highlighter-rouge">nginx-locations.conf</code></p>
<figure class="highlight"><pre><code class="language-nginx" data-lang="nginx"><span class="k">location</span> <span class="n">/dwave-scheduler</span> <span class="p">{</span>
<span class="kn">proxy_pass</span> <span class="s">http://dwave-scheduler-pool/dwave-scheduler</span><span class="p">;</span>
<span class="kn">proxy_http_version</span> <span class="mi">1</span><span class="s">.1</span><span class="p">;</span>
<span class="kn">proxy_set_header</span> <span class="s">Connection</span> <span class="s">""</span><span class="p">;</span>
<span class="p">}</span>
<span class="k">location</span> <span class="n">/order-api</span> <span class="p">{</span>
<span class="kn">proxy_pass</span> <span class="s">http://order-api-pool/order-api</span><span class="p">;</span>
<span class="kn">proxy_http_version</span> <span class="mi">1</span><span class="s">.1</span><span class="p">;</span>
<span class="kn">proxy_set_header</span> <span class="s">Connection</span> <span class="s">""</span><span class="p">;</span>
<span class="p">}</span>
<span class="k">location</span> <span class="n">/order-transfer</span> <span class="p">{</span>
<span class="kn">proxy_pass</span> <span class="s">http://order-transfer-pool/order-transfer</span><span class="p">;</span>
<span class="kn">proxy_http_version</span> <span class="mi">1</span><span class="s">.1</span><span class="p">;</span>
<span class="kn">proxy_set_header</span> <span class="s">Connection</span> <span class="s">""</span><span class="p">;</span>
<span class="p">}</span>
<span class="k">location</span> <span class="n">/session/session</span> <span class="p">{</span>
<span class="kn">proxy_pass</span> <span class="s">http://session-session-pool/session/session</span><span class="p">;</span>
<span class="kn">proxy_http_version</span> <span class="mi">1</span><span class="s">.1</span><span class="p">;</span>
<span class="kn">proxy_set_header</span> <span class="s">Connection</span> <span class="s">""</span><span class="p">;</span>
<span class="p">}</span></code></pre></figure>
<p>As you can see above, there are 4 distinct APIs. Each has a “pool” name (the upstream block below) and a unique route. Let’s take a look at the upstreams:</p>
<p><code class="highlighter-rouge">nginx-upstreams.conf</code></p>
<figure class="highlight"><pre><code class="language-nginx" data-lang="nginx"><span class="k">upstream</span> <span class="s">dwave-scheduler-pool</span> <span class="p">{</span>
<span class="kn">least_conn</span><span class="p">;</span>
<span class="kn">keepalive</span> <span class="mi">32</span><span class="p">;</span>
<span class="kn">server</span> <span class="nf">172.22.0.1</span><span class="p">:</span><span class="mi">8080</span><span class="p">;</span>
<span class="kn">server</span> <span class="nf">172.22.0.2</span><span class="p">:</span><span class="mi">8080</span><span class="p">;</span>
<span class="kn">server</span> <span class="nf">172.22.0.3</span><span class="p">:</span><span class="mi">8080</span><span class="p">;</span>
<span class="p">}</span>
<span class="k">upstream</span> <span class="s">order-api-pool</span> <span class="p">{</span>
<span class="kn">least_conn</span><span class="p">;</span>
<span class="kn">keepalive</span> <span class="mi">32</span><span class="p">;</span>
<span class="kn">server</span> <span class="nf">172.22.0.4</span><span class="p">:</span><span class="mi">8080</span><span class="p">;</span>
<span class="kn">server</span> <span class="nf">172.22.0.5</span><span class="p">:</span><span class="mi">8080</span><span class="p">;</span>
<span class="kn">server</span> <span class="nf">172.22.0.6</span><span class="p">:</span><span class="mi">8080</span><span class="p">;</span>
<span class="kn">server</span> <span class="nf">172.22.0.7</span><span class="p">:</span><span class="mi">8080</span><span class="p">;</span>
<span class="kn">server</span> <span class="nf">172.22.0.8</span><span class="p">:</span><span class="mi">8080</span><span class="p">;</span>
<span class="kn">server</span> <span class="nf">172.22.0.9</span><span class="p">:</span><span class="mi">8080</span><span class="p">;</span>
<span class="p">}</span>
<span class="k">upstream</span> <span class="s">order-transfer-pool</span> <span class="p">{</span>
<span class="kn">least_conn</span><span class="p">;</span>
<span class="kn">keepalive</span> <span class="mi">32</span><span class="p">;</span>
<span class="kn">server</span> <span class="nf">172.22.0.10</span><span class="p">:</span><span class="mi">8080</span><span class="p">;</span>
<span class="kn">server</span> <span class="nf">172.22.0.11</span><span class="p">:</span><span class="mi">8080</span><span class="p">;</span>
<span class="kn">server</span> <span class="nf">172.22.0.12</span><span class="p">:</span><span class="mi">8080</span><span class="p">;</span>
<span class="kn">server</span> <span class="nf">172.22.0.13</span><span class="p">:</span><span class="mi">8080</span><span class="p">;</span>
<span class="kn">server</span> <span class="nf">172.22.0.14</span><span class="p">:</span><span class="mi">8080</span><span class="p">;</span>
<span class="kn">server</span> <span class="nf">172.22.0.15</span><span class="p">:</span><span class="mi">8080</span><span class="p">;</span>
<span class="kn">server</span> <span class="nf">172.22.0.16</span><span class="p">:</span><span class="mi">8080</span><span class="p">;</span>
<span class="p">}</span>
<span class="k">upstream</span> <span class="s">session-session-pool</span> <span class="p">{</span>
<span class="kn">least_conn</span><span class="p">;</span>
<span class="kn">keepalive</span> <span class="mi">32</span><span class="p">;</span>
<span class="kn">server</span> <span class="nf">172.22.0.17</span><span class="p">:</span><span class="mi">8080</span><span class="p">;</span>
<span class="kn">server</span> <span class="nf">172.22.0.18</span><span class="p">:</span><span class="mi">8080</span><span class="p">;</span>
<span class="p">}</span></code></pre></figure>
<p>Each unique API has a set of servers that are serving traffic for that API. This can change any time - maybe we need to scale up to handle more traffic, or maybe one of these servers crashes.</p>
<p>In the past - this would have been configured statically. We would have to manually go and update it (or let something like Chef do it) each time a server changed. It also wasn’t all that resilient - if one of the servers above crashed, it could take 30 minutes or more for Chef to update the config, and that’s if Chef actually knew the API wasn’t healthy. Enter consul-template.</p>
<h2 id="consul-template">Consul-Template</h2>
<p>As our service catalog grew, and we started deploying more and more frequently, we realized this Nginx configuration needed to become more dynamic. Since we were already registering all of our services in Consul, Consul-Template seemed like a natural fit. I won’t go too far into what consul-template is - the Hashicorp website does a great job of that. Basically - consul-template allows us to dynamically generate configuration files from the services registered in Consul. We now have a consul template file <code class="highlighter-rouge">.ctmpl</code> for each of our config files - in this case, one for the <code class="highlighter-rouge">locations</code> block and one for the <code class="highlighter-rouge">upstreams</code> block. Let’s take a look at each one:</p>
<p><code class="highlighter-rouge">nginx-locations.ctmpl</code></p>
<figure class="highlight"><pre><code class="language-text" data-lang="text"> {{- range services -}}
{{- if in .Tags "nginx-route" -}}
{{- $boxes := service .Name }}
{{- if gt (len $boxes) 0 -}}
location /{{.Name | replaceAll "--" "/"}} {
proxy_pass https://{{.Name | replaceAll "--" "-"}}-pool/{{.Name | replaceAll "--" "/"}};
proxy_http_version 1.1;
proxy_set_header Connection "";
}
{{- end -}}
{{- end -}}
{{- end -}}
</code></pre></figure>
<p>Let’s step through that. The first block tells consul-template to get a list of all the services registered in Consul. We then filter by a specific tag, <code class="highlighter-rouge">nginx-route</code>, so we know it’s a service we specifically want to add to nginx. This means any API registering with Consul that wants to be routed through Nginx needs this tag. The next 2 lines are a quick check to make sure the service actually has healthy boxes - if it doesn’t, Nginx will complain that there is an upstream block with no hosts in it.</p>
<p>The next part is a little confusing. We build the unique route from the service name, replacing any <code class="highlighter-rouge">--</code> with a <code class="highlighter-rouge">/</code>. This lets us generate the route dynamically from the service name. If an API, say <code class="highlighter-rouge">/order-api/v2</code>, wanted to be in Nginx, it would register itself with the name <code class="highlighter-rouge">order-api--v2</code> and the tag <code class="highlighter-rouge">nginx-route</code>. Nginx would then serve that API at the route <code class="highlighter-rouge">/order-api/v2</code>.</p>
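<p>The name mangling itself is easy to sanity-check from the shell - consul-template’s <code class="highlighter-rouge">replaceAll</code> behaves like a global substitution (the service name below is just an example):</p>

```shell
# consul-template's `replaceAll "--" "/"` is a global substitution,
# equivalent to this sed command (example service name):
echo "order-api--v2" | sed 's/--/\//g'   # -> order-api/v2
# and the pool-name variant, replaceAll "--" "-":
echo "order-api--v2" | sed 's/--/-/g'    # -> order-api-v2
```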
<p>The next part is how we link the location block to the upstream block. We keep the same naming convention - except we replace the <code class="highlighter-rouge">--</code> with a <code class="highlighter-rouge">-</code> for the pool name, since a <code class="highlighter-rouge">/</code> wouldn’t be valid there. This matches the name in the upstream block - so anything coming to <code class="highlighter-rouge">/order-api/v2</code> gets routed to <code class="highlighter-rouge">order-api-v2-pool</code>. Let’s take a look at the upstream block:</p>
<p><code class="highlighter-rouge">nginx-upstreams.ctmpl</code></p>
<figure class="highlight"><pre><code class="language-text" data-lang="text"> {{- range services -}}
{{- if in .Tags "nginx-route" -}}
{{- $boxes := service .Name }}
{{- if gt (len $boxes) 0 -}}
upstream {{.Name | replaceAll "--" "-"}}-pool {
least_conn;
keepalive 32;
{{- range service .Name }}
server {{.Address}}:{{.Port}};{{ end }}
}
{{- end -}}
{{- end -}}
{{- end -}}
</code></pre></figure>
<p>Very similar to the location block - except in this one we step through the specific service, and list out the address and port of each one. This means that any time a new box is added, or a <a href="https://www.consul.io/intro/getting-started/checks.html">consul health check</a> fails, it will automatically update Nginx with the new configuration file.</p>
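<p>For completeness, here is roughly how consul-template gets run - this is a sketch, not our exact invocation: the template paths, output paths, and reload command are assumptions. Each <code class="highlighter-rouge">-template</code> flag takes a <code class="highlighter-rouge">source:destination:command</code> triple, and the command runs whenever the rendered output changes:</p>

```shell
# Sketch of a consul-template invocation (paths and reload command are
# assumptions, not our production setup). On any change in Consul, the
# templates are re-rendered and Nginx is reloaded:
consul-template \
  -template "/etc/consul-template/nginx-locations.ctmpl:/etc/nginx/conf.d/locations.conf:nginx -s reload" \
  -template "/etc/consul-template/nginx-upstreams.ctmpl:/etc/nginx/conf.d/upstreams.conf:nginx -s reload"
```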
<h1 id="conclusion">Conclusion</h1>
<p>We’ve been using this in production for the last few months, and it works very well. We now have the following benefits:</p>
<ul>
<li>automatic discovery of new microservices</li>
<li>automatic discovery of servers
<ul>
<li>failed servers are removed</li>
<li>new servers are added</li>
</ul>
</li>
</ul>
<p>I’m continuing to learn Nginx and Consul, so if anyone has any suggestions, let me know!</p>Daniel Parkerdcparker88@gmail.comOverview In my day job, we heavily use Nginx for our edge web servers. These servers route traffic to many different microservices. This means the Nginx config can become complex - with many different Nginx locations and upstreams. Recently, we started registering all of these services in Consul. This gives us many benefits - one of which is the ability to use consul-template. This tool allows us to dynamically generate code based on the services registered inside of Consul. This means we can automatically add servers to Nginx as new ones come online - and automatically remove them as we scale down or a service fails. I wanted to give some examples around how we accomplished this, and what our configuration looks like.Linux Command Line - Tips and Tricks2017-12-28T06:00:00+00:002017-12-28T06:00:00+00:00https://danielparker.me/linux/cli/til/tips/linux-cli<h1 id="overview">Overview</h1>
<p>I use a terminal pretty often in my daily job - either when logging in to servers or on my Macbook. In the spirit of learning, I decided to put together a document with some tips/tricks/shortcuts that I have learned over the years.</p>
<h2 id="working-with-your-history">Working with your history</h2>
<p>A bash shell keeps a history of every command you run in the terminal. Here are some commands I rely on daily:</p>
<p>You can view your history with <code class="highlighter-rouge">history</code>:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ history | grep git
12 git diff
31 cd git
70 git submodule update --init --recursive
72 git status
73 git add .
74 git status
77 vim .git/config
78 git status
79 rm -rf .git
80 git status
81 git init
</code></pre></div></div>
<p>Each line is prefixed with a line number - it stays the same as long as that entry remains in your history. If you want to re-run a command from your history, you can do something like: <code class="highlighter-rouge">!$line_number</code>:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ !80
git status
</code></pre></div></div>
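<hr />
<p>Related - how much history the shell keeps is configurable. Here are the settings I put in <code class="highlighter-rouge">~/.bashrc</code> (the values are just suggestions, tune them to taste):</p>

```shell
# ~/.bashrc - keep a bigger, timestamped bash history (values are suggestions)
export HISTSIZE=10000           # commands kept in memory for the session
export HISTFILESIZE=20000       # commands kept in ~/.bash_history
export HISTTIMEFORMAT='%F %T '  # prefix each `history` entry with a timestamp
shopt -s histappend             # append to the history file instead of overwriting
```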
<hr />
<p>One of my favorite commands (and the one that saves me the most time) is the reverse history search. You can search the most recent commands you have run by pressing <code class="highlighter-rouge">ctrl + r</code>:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>(reverse-i-search)`':
</code></pre></div></div>
<p>Then you can start typing. Say you want to find the last <code class="highlighter-rouge">docker</code> command you ran, start typing ‘docker’:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>(reverse-i-search)`doc': docker kill 80486e6e1927
</code></pre></div></div>
<p>You’ll see, after I type ‘doc’ the last docker command appears. You can now scroll through all matches by pressing <code class="highlighter-rouge">ctrl + r</code> again. Once you find the command you want, you can hit enter to run it. You can also hit backspace if you typo a letter you don’t want to search.</p>
<h2 id="assorted-tips-and-tricks">Assorted tips and tricks</h2>
<p>Forget to sudo a command? Instead of typing it again, or scrolling back and typing sudo, simply run <code class="highlighter-rouge">sudo !!</code>:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>(1:1002)$ ls /etc/sensu/
ls: cannot open directory /etc/sensu/: Permission denied
(1:1002)$ sudo !!
sudo ls /etc/sensu/
conf.d config.json config.json.example extensions handlers
</code></pre></div></div>
<p>This will run the last command again, but with sudo applied.</p>
<hr />
<p>Go back to last directory you were in - did you <code class="highlighter-rouge">cd</code> to a directory and want to go back to where you were? Simply run <code class="highlighter-rouge">cd -</code></p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>(1:1003)$ cd /opt/chef
(1:1004)$ cd /tmp/
(1:1005)$ cd -
/opt/chef
(1:1006)$ pwd
/opt/chef
</code></pre></div></div>
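<p>Under the hood, the shell tracks the previous directory in <code class="highlighter-rouge">$OLDPWD</code>, and <code class="highlighter-rouge">cd -</code> is equivalent to <code class="highlighter-rouge">cd "$OLDPWD"</code>:</p>

```shell
cd /tmp
cd /usr
echo "$OLDPWD"    # /tmp - the shell updates this on every directory change
cd - > /dev/null  # equivalent to: cd "$OLDPWD"
pwd               # /tmp
```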
<h2 id="git">Git</h2>
<p>I also use the Git command line pretty often. Git can be very confusing - especially when starting out. These commands are very basic - but they help me out almost daily.</p>
<p>Need to create a new branch quickly? Just use <code class="highlighter-rouge">checkout -b</code>:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>git checkout -b new-branch
</code></pre></div></div>
<hr />
<p>Make changes to the wrong branch, and need to switch branches without losing your changes? You can just use <code class="highlighter-rouge">git stash</code></p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>git diff
diff --git a/README.md b/README.md
index a058cd7..7ad7ca6 100644
--- a/README.md
+++ b/README.md
@@ -1,3 +1,4 @@
+Some code I don't want added on this branch.
</code></pre></div></div>
<p>Stashing saves the changes to your “stash”:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>git stash
Saved working directory and index state WIP on new-changes: a7e7137 Merge pull request #102
</code></pre></div></div>
<p>Then you can switch branches, and apply the changes from your stash:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>git checkout -b different-branch
Switched to a new branch 'different-branch'
f45c898eb299:website z077514$ git stash pop
On branch different-branch
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git checkout -- <file>..." to discard changes in working directory)
modified: README.md
no changes added to commit (use "git add" and/or "git commit -a")
</code></pre></div></div>
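<p>The whole flow can be reproduced end to end in a throwaway repository (a sketch - the file name, branch name, and commit messages are made up):</p>

```shell
set -e
repo=$(mktemp -d)                  # throwaway repository
cd "$repo"
git init -q
git config user.email "you@example.com"
git config user.name "you"
echo "base" > README.md
git add README.md
git commit -qm "initial commit"
echo "oops, wrong branch" >> README.md   # change made on the wrong branch
git stash -q                             # working tree is clean again
git checkout -qb different-branch
git stash pop -q                         # change re-applied on the new branch
grep "wrong branch" README.md
```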
<p>One thing to note - <code class="highlighter-rouge">git stash pop</code> removes the code in your stash. If you want to apply the changes <em>and</em> keep your stash, just use <code class="highlighter-rouge">git stash apply</code>. This way you could theoretically apply your stashed changes to more branches.</p>Daniel Parkerdcparker88@gmail.comOverview I use a terminal pretty often in my daily job - either when logging in to servers or on my Macbook. In the spirit of learning, I decided to put together a document with some tips/tricks/shortcuts that I have learned over the years.Setting up a local P2Pool and mining Vertcoin with CCMiner2017-12-05T06:00:00+00:002017-12-05T06:00:00+00:00https://danielparker.me/cryptocurrency/vertcoin/mining/ccminer/vertcoin<h1 id="overview">Overview</h1>
<p>I’ve (along with everyone else, it seems) been fascinated with cryptocurrency recently, and have been learning more and more about it. One of the most interesting things to me is the act of mining the coins. I’ve played with mining in the past, but with all the new popularity, mining the well-known coins is basically impossible for the average person. There are many altcoins out there - almost too many to choose from. I’ve been doing some reading lately, and <a href="https://vertcoin.org/">Vertcoin</a> caught my eye. I recently spent some time setting up my own local P2Pool for Vertcoin, and pointing a miner at that local pool. This is meant to be a guide for:</p>
<ol>
<li>installing a wallet</li>
<li>creating your own P2Pool server</li>
<li>starting to mine using your pool</li>
</ol>
<p>Everything in this guide is specific to Windows 10, but it should be similar for other platforms. I’m using CCMiner since I have Nvidia graphics cards.</p>
<h2 id="benefits-of-running-your-own-p2pool">Benefits of running your own P2Pool</h2>
<p>There are many public Vertcoin pools out there for someone to use. The simplest mining setup would use a public pool, like <a href="https://vertcoin.easymine.online/">Vertcoin Easy Mine.</a> There are, however, some notable benefits to running your own:</p>
<ul>
<li>lower latency - since you’re on the same network (or even the same computer) as the pool, latency will go down, which means fewer dead-on-arrival shares</li>
<li>0% fees - some (most) public pools charge a fee to use. A local pool will be free</li>
<li>zero downtime - since you control the pool, you can keep it running as long as you like</li>
<li>contribute to decentralization - Vertcoin (and other cryptocurrencies) need to be decentralized to survive</li>
</ul>
<p>Now that that’s done, we’re ready for the first step.</p>
<h1 id="vertcoin-wallet">Vertcoin wallet</h1>
<p>The Vertcoin wallet is what holds all your coins, so make sure to password protect/encrypt/back it up. I won’t go in to detail about that here - there are other guides on it.</p>
<h2 id="installing-vertcoin-wallet">Installing Vertcoin wallet</h2>
<p>Before you can start mining or start your own pool, you need to install the Vertcoin wallet. You can find the wallet on the <a href="https://github.com/vertcoin/vertcoin/releases">Vertcoin github releases page.</a> Make sure you pick the proper OS. I set up my pool on a Windows machine, so I chose <code class="highlighter-rouge">vertcoin-v0.12.0-windows-64bit.zip</code>. Once it’s downloaded:</p>
<ul>
<li>unzip the file</li>
<li>open the <code class="highlighter-rouge">vertcoin-qt.exe</code></li>
<li>it should ask you for an install location - it’s ok to keep the defaults</li>
<li>it should start syncing the blockchain (this will take approximately forever)</li>
</ul>
<p>Once the syncing is done, you should see a clean wallet like so:
<img src="/images/vertcoin-clean.png" alt="alt text" title="Vertcoin wallet after syncing" /></p>
<h2 id="create-a-receive-address">Create a receive address</h2>
<p>In order to receive the coins you mine, you’ll need to create a receive address. In the wallet, click the “receive” tab. You should see a form like:
<img src="/images/vertcoin-receive-form.png" alt="alt text" title="Vertcoin wallet receive form" /></p>
<p>You don’t have to fill any of these out, but I use one address for mining - that way I know where the transactions come from. I used these values:</p>
<ul>
<li>Label: “miner”</li>
<li>Amount: leave blank</li>
<li>Message: “received from mining”</li>
</ul>
<p>Then click the receive payment button - a box should appear with a QR code and address. Go ahead and close this for now - the information will be available later by navigating back to the receive tab.
<img src="/images/vertcoin-receive-address.png" alt="alt text" title="Vertcoin wallet receive address" /></p>
<h2 id="configure-the-miner-rpc-connections">Configure the miner rpc connections</h2>
<p>Next, we need to configure our wallet to listen for local rpc connections from a pool. The pool uses the wallet to communicate with the blockchain, so both must be running at the same time. In the wallet:</p>
<ul>
<li>click “Settings”
<ul>
<li>click “Options”</li>
</ul>
</li>
<li>click “Open Configuration File”</li>
</ul>
<p><img src="/images/vertcoin-config-file.png" alt="alt text" title="Vertcoin wallet open configuration file" /></p>
<p>This configuration file should open in something like Notepad, and it should be blank. Add the following text:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>server=1
rpcuser=myuser
rpcpassword=verystrongpasswordnobodycanguess
</code></pre></div></div>
<p>Save the file, and restart the Vertcoin wallet. We’re now ready to install P2Pool.</p>
<h1 id="installing-p2pool">Installing P2Pool</h1>
<p>The P2Pool binaries are located in the <a href="https://github.com/vertcoin/p2pool-vtc/releases">Github Releases section</a> once again. Download the latest release (in my case it was <code class="highlighter-rouge">v0.1.1</code>). You will need to unzip the file once it’s downloaded. Inside the unzipped folder, you should see a Windows batch file titled something like <code class="highlighter-rouge">Start P2Pool Network 2</code>. You can either choose to edit this file, or create another batch file inside this folder to start the pool. In the file, add this:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>run_p2pool.exe --net vertcoin2 myuser verystrongpasswordnobodycanguess $wallet_address -w $local_ip:port
</code></pre></div></div>
<p>More options are documented <a href="https://en.bitcoin.it/wiki/P2Pool#Option_Reference">here</a> if you need to tweak it even more.</p>
<p>$wallet_address will be the receive address you created earlier in the wallet. <code class="highlighter-rouge">-w $local_ip:port</code> isn’t strictly needed - but I wanted to make sure the pool started on the right IP address and used a port I chose (I used 9999 so it’s easy to remember.) Save and close the file.</p>
<p>Now, run the batch file. You should see the pool start up with messages similar to:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>C:\Users\MINING\Downloads\p2pool-vtc-WIN64\p2pool-vtc>run_p2pool.exe --net vertcoin2 myuser verystrongpasswordnobodycanguess $wallet_address -w $local_ip:port
2017-12-14 16:57:00.523000 p2pool (version 3d0f826-dirty)
2017-12-14 16:57:00.524000
2017-12-14 16:57:00.524000 Testing bitcoind RPC connection to 'http://127.0.0.1:5888/' with username 'user'...
2017-12-14 16:57:00.541000 ...success!
2017-12-14 16:57:00.542000 Current block hash: 58a9c1ee25525b229cc0ef23b668d4f34e15890da1a01bc6cca4eb7162f94ecc
2017-12-14 16:57:00.542000 Current block height: 841701
2017-12-14 16:57:00.542000
2017-12-14 16:57:00.542000 Testing bitcoind P2P connection to '127.0.0.1:5889'...
2017-12-14 16:57:00.544000 ...success!
2017-12-14 16:57:00.544000
2017-12-14 16:57:00.545000 Determining payout address...
2017-12-14 16:57:00.545000 ...success! Payout address: Vw9utVuAm9wRxcBDDLGVfsGs7QXB6xP3oe
2017-12-14 16:57:00.545000
2017-12-14 16:57:00.545000 Loading shares...
</code></pre></div></div>
<p>Once the pool starts, you will also see message about the global hashing rate:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>2017-12-14 16:59:45.665000 Shares: 0 (0 orphan, 0 dead) Stale rate: ??? Efficiency: ??? Current payout: (0.0000)=0.0000 VTC
2017-12-14 16:59:45.665000 Pool: 127GH/s Stale rate: 15.2% Expected time to block: 23.7 minutes
2017-12-14 16:59:46.970000 Punishing share for 'Block-stale detected! height(b79db16c219589f65f4ca6782f8a07129e8f86f2f95ccda7a8ad517eb63160ae) < height(fbe6aa7f649150c2a8da4ae834ff912b4c8726877ba727e445b2d69ef327e789) or 1b018c96 != 1b018c98'! Jumping from b1b065c0 to 0e592982!
2017-12-14 16:59:46.974000 Punishing share for 'Block-stale detected! height(b79db16c219589f65f4ca6782f8a07129e8f86f2f95ccda7a8ad517eb63160ae) < height(fbe6aa7f649150c2a8da4ae834ff912b4c8726877ba727e445b2d69ef327e789) or 1b018c96 != 1b018c98'! Jumping from b1b065c0 to 0e592982!
2017-12-14 16:59:48.668000 P2Pool: 17788 shares in chain (13713 verified/17793 total) Peers: 6 (0 incoming)
2017-12-14 16:59:48.669000 Local: 111MH/s in last 33.7 seconds Local dead on arrival: ~0.4% (0-3%) Expected time to share: 6.5 hours
2017-12-14 16:59:48.669000 Shares: 0 (0 orphan, 0 dead) Stale rate: ??? Efficiency: ??? Current payout: (0.0000)=0.0000 VTC
2017-12-14 16:59:48.670000 Pool: 127GH/s Stale rate: 15.2% Expected time to block: 23.7 minutes
2017-12-14 16:59:49.858000 New work for worker! Difficulty: 0.500000 Share difficulty: 9745.649764 Total block value: 25.002251 VTC including 4 transactions
</code></pre></div></div>
<p>This means your pool is working! We’re now ready to connect CCMiner and start mining.</p>
<h1 id="installing-ccminer">Installing CCMiner</h1>
<p>CCMiner is a utility for mining on Nvidia graphics cards. If you don’t have Nvidia, you may need to use something else, like the Vertcoin one-click miner.</p>
<p>Installing CCMiner is as easy as navigating to the <a href="https://github.com/tpruvot/ccminer/releases">Github Release page</a> again. Find the release for your OS; for me it’s <code class="highlighter-rouge">ccminer-x86-2.2.3-cuda9.7z</code>. Go ahead and download this file, and unzip it. You may need to install 7zip if you don’t already have it. Inside the CCMiner folder will be a Windows batch file titled something like <code class="highlighter-rouge">RUN-CREA</code> - you can either edit this file or create a new batch file to run. Inside the file, add:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>ccminer-x64 -a lyra2v2 -i 20 -o stratum+tcp://$local_ip:$port -u $wallet_address -p any_word_here
</code></pre></div></div>
<p>The $local_ip will be the IP of your pool. In my case it’s a 192.168 address because that’s what I told it to bind on. If you’re mining on the same computer as the wallet, you can probably use <code class="highlighter-rouge">127.0.0.1</code>. The port is the same - use the port you told the pool to use. Make sure you set your wallet address so you can get paid, and set -p to any random identifier.</p>
<p>Now your miner should start! You should see accepted shares if everything is working properly:
<img src="/images/vertcoin-miner.png" alt="alt text" title="Vertcoin miner working" /></p>
<h1 id="conclustion">Conclusion</h1>
<p>Hopefully this was helpful! I’m just getting started so if anyone has any tips/tricks let me know!</p>Daniel Parkerdcparker88@gmail.comOverview I’ve (along with everyone else recently) been fascinated with cryptocurrency recently, and have been learning more and more about it. One of the most interesting things to me is the act of mining the coins. I’ve played with mining in the past, but with all the new popularity, mining the well-known coins is basically impossible for the average person. There are many altcoins out there - almost too many to choose from. I’ve been doing some reading lately, and Vertcoin caught my eye. I recently spent some time setting up my own local P2Pool for Vertcoin, and pointing a miner at that local pool. This is meant to be a guide for: installing a wallet creating your own P2Pool server starting to mine using your poolTIL - Creating RAID volumes in MegaCLI2017-11-20T06:00:00+00:002017-11-20T06:00:00+00:00https://danielparker.me/til/megacli/megaraid/mega-cli-TIL<h1 id="overview">Overview</h1>
<p>I learned something new today. At work, we have a decent number (~300) of bare metal servers that my teams use for higher throughput workloads - things like <a href="/categories/#cassandra">Cassandra</a> or Kafka. These servers all have anywhere from 8 to 24 hard drives and MegaRAID controllers. In the past, the hard drives/RAID/data directories were created for us by a different team, so we had no control over RAID level, JBOD, anything. Recently, we’ve wanted to change the RAID configuration on some of these servers. This brought us to the MegaCLI command-line utility. This tool turned out to be very hard to use, as there doesn’t seem to be much documentation at all. I am going to try and document the process we went through here.</p>
<h1 id="installing-megacli">Installing MegaCLI</h1>
<p>First, we needed to make sure the RAID controller on our boxes was supported. You can check the RAID controller with this command: <code class="highlighter-rouge">lspci | grep -i raid</code>. You should see something like:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>02:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS 2208 [Thunderbolt] (rev 05)
</code></pre></div></div>
<p>According to <a href="http://hwraid.le-vert.net/wiki/LSIMegaRAIDSAS#a2.Linuxkerneldrivers">this article</a>, the RAID controller on our boxes is supported by MegaCLI. The README for MegaCLI also contains the following:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Supported Controllers
==================
MegaRAID SAS 8208ELP
MegaRAID SAS 8208XLP
MegaRAID SAS 8204ELP
MegaRAID SAS 8204XLP
</code></pre></div></div>
<p>Once we’ve confirmed support, it’s time to install:</p>
<ol>
<li>Download MegaCLI from the Broadcom website. You’ll have to agree before <a href="https://www.broadcom.com/support/download-search?dk=megacli">downloading</a></li>
<li>Once the file is downloaded, unzip the files. There will be an RPM inside: <code class="highlighter-rouge">MegaCli-4.00.16-1.i386.rpm</code></li>
<li>Install with <code class="highlighter-rouge">yum localinstall MegaCli-4.00.16-1.i386.rpm</code></li>
<li>By default, MegaCLI is installed to the <code class="highlighter-rouge">/opt/MegaRAID/MegaCli/MegaCli64</code> file. You can make this easier to use by setting an alias: <code class="highlighter-rouge">alias megacli='/opt/MegaRAID/MegaCli/MegaCli64'</code></li>
<li>Test that MegaCLI is working by listing all the physical drives in your server: <code class="highlighter-rouge">/opt/MegaRAID/MegaCli/MegaCli64 -PDList -a0</code> This should return something for each drive, similar to:</li>
</ol>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Enclosure Device ID: 32
Slot Number: 25
Drive's position: DiskGroup: 0, Span: 0, Arm: 1
Enclosure position: 1
Device Id: 25
WWN: 50000C0F02C81479
Sequence Number: 4
Media Error Count: 0
Other Error Count: 0
Predictive Failure Count: 0
Last Predictive Failure Event Seq Number: 0
PD Type: SAS
Raw Size: 279.396 GB [0x22ecb25c Sectors]
Non Coerced Size: 278.896 GB [0x22dcb25c Sectors]
Coerced Size: 278.875 GB [0x22dc0000 Sectors]
Sector Size: 0
Firmware state: Online, Spun Up
Device Firmware Level: D1S4
Shield Counter: 0
Successful diagnostics completion on : N/A
SAS Address(0): 0x50000c0f02c8147a
SAS Address(1): 0x0
Connected Port Number: 0(path0)
Inquiry Data: WD WD3001BKHG D1S4WX91E13KNW87
FDE Capable: Not Capable
FDE Enable: Disable
Secured: Unsecured
Locked: Unlocked
Needs EKM Attention: No
Foreign State: None
Device Speed: 6.0Gb/s
Link Speed: 6.0Gb/s
Media Type: Hard Disk Device
Drive Temperature :42C (107.60 F)
PI Eligibility: No
Drive is formatted for PI information: No
PI: No PI
Port-0 :
Port status: Active
Port's Linkspeed: 6.0Gb/s
Port-1 :
Port status: Active
Port's Linkspeed: Unknown
Drive has flagged a S.M.A.R.T alert : No
</code></pre></div></div>
<p>There are some important terms used here that I want to explain:</p>
<ul>
<li>Adapter: This is the actual RAID controller we’re using. On all of our servers there is only one, and it’s designated by the number zero. <code class="highlighter-rouge">-a0</code> in the command above is referring to “adapter 0.”</li>
<li>Enclosure Device ID: This is the physical chassis number the drive is attached to, represented by an ID. On our servers, all drives have the same ID, but this won’t always be the case.</li>
<li>Physical Drives: Actual physical (spinning or SSD) drives connected to the server, each will have an ID of <code class="highlighter-rouge">$EnclosureID:$DriveID</code></li>
<li>Virtual Drives: This is a virtual drive containing any number of physical drives in a RAID or JBOD configuration. These also have an ID similar to the adapter ID.</li>
</ul>
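<p>With those terms in mind, the <code class="highlighter-rouge">$EnclosureID:$DriveID</code> pairs that later commands need can be scraped out of the <code class="highlighter-rouge">-PDList</code> output with awk - a sketch, run here against a trimmed-down stand-in for the real output (the values are examples):</p>

```shell
# Extract "enclosure:slot" pairs from `MegaCli64 -PDList -a0` output.
# The variable below stands in for real controller output (values are examples).
pdlist='Enclosure Device ID: 8
Slot Number: 1
Media Error Count: 0
Enclosure Device ID: 8
Slot Number: 2'
echo "$pdlist" | awk -F': ' '/^Enclosure Device ID/ {e=$2} /^Slot Number/ {print e ":" $2}'
# -> 8:1
#    8:2
```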
<p>In our use case, we had two types of disk configuration we wanted: RAID 10 and JBOD.</p>
<h1 id="jbod">JBOD</h1>
<p>Our first use case was to set up <a href="https://en.wikipedia.org/wiki/Non-RAID_drive_architectures#JBOD">JBOD</a> for some of our Cassandra servers. The servers with SSDs installed had the following config:</p>
<ul>
<li>8 SSD drives
<ul>
<li>500 gb each</li>
</ul>
</li>
</ul>
<p>We followed these steps to set up JBOD with MegaCLI:</p>
<ol>
<li>Validate all 8 drives appear: <code class="highlighter-rouge">/opt/MegaRAID/MegaCli/MegaCli64 -PDList -a0</code>. In our case, there are 8 entries that look like this:</li>
</ol>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Enclosure Device ID: 8
Slot Number: 7
Drive's position: DiskGroup: 1, Span: 1, Arm: 0
Enclosure position: 1
Device Id: 16
WWN: 55cd2e404b7746ef
Sequence Number: 2
Media Error Count: 0
Other Error Count: 0
Predictive Failure Count: 0
Last Predictive Failure Event Seq Number: 0
PD Type: SATA
Raw Size: 447.130 GB [0x37e436b0 Sectors]
Non Coerced Size: 446.630 GB [0x37d436b0 Sectors]
Coerced Size: 446.102 GB [0x37c34800 Sectors]
Sector Size: 512
Logical Sector Size: 512
Physical Sector Size: 4096
Firmware state: Online, Spun Up
Device Firmware Level: 0370
Shield Counter: 0
Successful diagnostics completion on : N/A
SAS Address(0): 0x584b261c2fb68186
Connected Port Number: 0(path0)
Inquiry Data: BTWL5043014Z480QGN INTEL SSDSC2BB480G4 D2010370
FDE Capable: Not Capable
FDE Enable: Disable
Secured: Unsecured
Locked: Unlocked
Needs EKM Attention: No
Foreign State: None
Device Speed: 6.0Gb/s
Link Speed: 6.0Gb/s
Media Type: Solid State Device
Drive: Not Certified
Drive Temperature :12C (53.60 F)
PI Eligibility: No
Drive is formatted for PI information: No
PI: No PI
Drive's NCQ setting : Enabled
Port-0 :
Port status: Active
Port's Linkspeed: 6.0Gb/s
Drive has flagged a S.M.A.R.T alert : No
</code></pre></div></div>
<ol>
<li>List the current virtual drives: <code class="highlighter-rouge">/opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -Lall -a0</code>. For our servers, this should already show a single virtual drive for the OS. It’s important to note which physical drives are part of it, because we don’t want to reuse them in our new array - doing so could corrupt or even delete the OS on the server, so be careful.</li>
<li>Figure out the Enclosure Device ID ($id below) of the 8 drives we want to JBOD: <code class="highlighter-rouge">/opt/MegaRAID/MegaCli/MegaCli64 -PDList -a0 | grep -e '^Enclosure Device ID:' | head -1 | cut -f2- -d':' | xargs</code></li>
<li>Figure out the slot numbers of the 8 drives we are going to JBOD. The only way I was able to do this was to manually look through the output of the first command. In our case, the slot numbers were 1 - 8.</li>
<li>Set all the drives to “Good” in MegaCLI (this marks them as unconfigured but spun up): <code class="highlighter-rouge">/opt/MegaRAID/MegaCli/MegaCli64 -PDMakeGood -PhysDrv[$id:1,$id:2,$id:3,$id:4,$id:5,$id:6,$id:7,$id:8] -Force -a0</code> <em>Note:</em> the numbers 1 - 8 are the slot numbers of the disks; make sure to change these to match your slot numbers.</li>
<li>Check and see if JBOD support is enabled: <code class="highlighter-rouge">/opt/MegaRAID/MegaCli/MegaCli64 AdpGetProp EnableJBOD -aALL</code>. On all of our servers, this returns: <code class="highlighter-rouge">Adapter 0: JBOD: Disabled</code>, so we need to turn it on.</li>
<li>If JBOD is disabled from step 6, turn JBOD support on: <code class="highlighter-rouge">/opt/MegaRAID/MegaCli/MegaCli64 AdpSetProp EnableJBOD 1 -a0</code></li>
<li>Set each disk from above to be in JBOD mode: <code class="highlighter-rouge">/opt/MegaRAID/MegaCli/MegaCli64 -PDMakeJBOD -PhysDrv[$id:1,$id:2,$id:3,$id:4,$id:5,$id:6,$id:7,$id:8] -a0</code></li>
<li>Once the disks are set to JBOD, each one should appear to the OS. You can check with <code class="highlighter-rouge">lsblk</code>:</li>
</ol>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sdd 8:48 0 279.5G 0 disk
sde 8:64 0 279.5G 0 disk
sdf 8:80 0 279.5G 0 disk
sdg 8:96 0 279.5G 0 disk
sdl 8:176 0 279.5G 0 disk
sdk 8:160 0 279.5G 0 disk
sdn 8:208 0 279.5G 0 disk
sdc 8:32 0 279.5G 0 disk
</code></pre></div></div>
<p><code class="highlighter-rouge">sd*</code> above is the disk ID assigned by the OS. Use it in the commands below.</p>
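<p>Steps 3 and 4 above involve eyeballing the <code class="highlighter-rouge">-PDList</code> output for enclosure and slot numbers. Here is a small parsing sketch - a hypothetical helper, not a MegaCLI feature - that pulls out the <code class="highlighter-rouge">$EnclosureID:$SlotID</code> pairs instead. A trimmed sample of the output is embedded for illustration; in real use, pipe in the live command output.</p>

```shell
#!/bin/sh
# Hypothetical helper: pair each "Enclosure Device ID" line in
# `MegaCli64 -PDList -a0` output with the "Slot Number" line that follows
# it, printing $EnclosureID:$SlotID pairs ready for -PDMakeGood/-PDMakeJBOD.
# The sample below is trimmed output embedded for illustration only.
sample_pdlist() {
cat <<'EOF'
Enclosure Device ID: 8
Slot Number: 1
Media Type: Solid State Device
Enclosure Device ID: 8
Slot Number: 2
Media Type: Solid State Device
EOF
}

# awk remembers the most recent enclosure ID and emits it with each slot.
sample_pdlist | awk '
  /^Enclosure Device ID:/ { enc = $NF }
  /^Slot Number:/         { print enc ":" $NF }
'
```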
<p>Now that we have the disks and the OS can see them, it’s time to format them:</p>
<ol>
<li>Format the disk: <code class="highlighter-rouge">mkfs.xfs -s size=4096 /dev/$disk_id -f</code></li>
<li>Create a directory to mount the disk: <code class="highlighter-rouge">mkdir /data1/</code></li>
<li>Mount the disk: <code class="highlighter-rouge">mount -t xfs -o noatime /dev/$disk_id /data1</code></li>
<li>Add an entry to fstab so the mount survives a reboot: <code class="highlighter-rouge">echo "/dev/${disk_id} /data1 xfs noatime" | sudo tee -a /etc/fstab</code></li>
<li>Repeat for each disk.</li>
</ol>
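<p>The per-disk steps are easy to script. Here is a dry-run sketch that only prints the commands; the disk names are assumed for illustration, so substitute the ones <code class="highlighter-rouge">lsblk</code> reported, and drop the <code class="highlighter-rouge">echo</code>s (running as root) to format and mount for real.</p>

```shell
#!/bin/sh
# Dry-run sketch of the format/mount loop above. Disk names are assumed
# for illustration - substitute the ids lsblk reported on your server.
# Each command is echoed rather than executed; remove the echoes (and run
# as root) to actually format and mount the disks.
i=1
for disk in sdc sdd sde sdf sdg sdk sdl sdn; do
  echo "mkfs.xfs -s size=4096 /dev/$disk -f"
  echo "mkdir -p /data$i"
  echo "mount -t xfs -o noatime /dev/$disk /data$i"
  echo "echo '/dev/$disk /data$i xfs noatime' >> /etc/fstab"
  i=$((i + 1))
done
```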
<p>You should now have 8 disks mounted in different directories on the server. I like to check with <code class="highlighter-rouge">df -h</code>:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Filesystem Size Used Avail Use% Mounted on
/dev/sdd 276G 50G 212G 19% /data1
/dev/sdl 276G 46G 216G 18% /data2
/dev/sdg 276G 54G 208G 21% /data3
/dev/sdf 276G 53G 209G 21% /data4
/dev/sdk 276G 52G 210G 20% /data5
/dev/sdc 276G 63G 199G 24% /data6
/dev/sde 276G 47G 215G 18% /data7
/dev/sdn 276G 48G 214G 19% /data8
</code></pre></div></div>
<h1 id="raid-10">RAID 10</h1>
<p>Another use case we had was to set our disks to <a href="https://en.wikipedia.org/wiki/Nested_RAID_levels#RAID_10_.28RAID_1.2B0.29">RAID 10.</a> This is a combination of RAID 0 and 1, and allows your array to survive the failure of a disk. The servers we were configuring RAID 10 on had the following config:</p>
<ul>
<li>24 hard drives
<ul>
<li>1 TB each</li>
</ul>
</li>
</ul>
<p>To set these up in a RAID 10 array, follow these steps:</p>
<ol>
<li>Validate all 24 drives appear: <code class="highlighter-rouge">/opt/MegaRAID/MegaCli/MegaCli64 -PDList -a0</code>. In our case, there are 24 entries that look like this:</li>
</ol>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Enclosure Device ID: 32
Slot Number: 21
Drive's position: DiskGroup: 1, Span: 0, Arm: 21
Enclosure position: 1
Device Id: 21
WWN: 5000C50056C0B778
Sequence Number: 4
Media Error Count: 0
Other Error Count: 0
Predictive Failure Count: 0
Last Predictive Failure Event Seq Number: 0
PD Type: SAS
Raw Size: 931.512 GB [0x74706db0 Sectors]
Non Coerced Size: 931.012 GB [0x74606db0 Sectors]
Coerced Size: 931.0 GB [0x74600000 Sectors]
Sector Size: 0
Firmware state: Online, Spun Up
Device Firmware Level: AS09
Shield Counter: 0
Successful diagnostics completion on : N/A
SAS Address(0): 0x5000c50056c0b779
SAS Address(1): 0x0
Connected Port Number: 0(path0)
Inquiry Data: SEAGATE ST91000640SS AS099XG4QFD1
FDE Capable: Not Capable
FDE Enable: Disable
Secured: Unsecured
Locked: Unlocked
Needs EKM Attention: No
Foreign State: None
Device Speed: 6.0Gb/s
Link Speed: 6.0Gb/s
Media Type: Hard Disk Device
Drive Temperature :25C (77.00 F)
PI Eligibility: No
Drive is formatted for PI information: No
PI: No PI
Port-0 :
Port status: Active
Port's Linkspeed: 6.0Gb/s
Port-1 :
Port status: Active
Port's Linkspeed: Unknown
Drive has flagged a S.M.A.R.T alert : No
</code></pre></div></div>
<ol>
<li>List the current virtual drives: <code class="highlighter-rouge">/opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -Lall -a0</code></li>
<li>Figure out the Enclosure Device ID ($id below) of the 24 drives we want to RAID: <code class="highlighter-rouge">/opt/MegaRAID/MegaCli/MegaCli64 -PDList -a0 | grep -e '^Enclosure Device ID:' | head -1 | cut -f2- -d':' | xargs</code></li>
<li>Figure out the slot numbers of the 24 drives. The only way I was able to do this was to manually look through the output of the first command. In our case, the slot numbers were 0 - 23.</li>
<li>Set all the drives to “Good” in MegaCLI (this marks them as unconfigured but spun up):</li>
</ol>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>/opt/MegaRAID/MegaCli/MegaCli64 -PDMakeGood -PhysDrv[$id:0,$id:1,$id:2,$id:3,$id:4,$id:5,$id:6,$id:7,$id:8,$id:9,$id:10,$id:11,$id:12,$id:13,$id:14,$id:15,$id:16,$id:17,$id:18,$id:19,$id:20,$id:21,$id:22,$id:23] -Force -a0
</code></pre></div></div>
<p><em>Note:</em> the numbers 0 - 23 are the slot numbers of the disks; make sure to change these to match your slot numbers.</p>
<ol>
<li>Set up the RAID 10 span:</li>
</ol>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>/opt/MegaRAID/MegaCli/MegaCli64 -CfgSpanAdd -r10 -Array1[$id:0,$id:1,$id:2,$id:3,$id:4,$id:5,$id:6,$id:7,$id:8,$id:9,$id:10,$id:11] -Array2[$id:12,$id:13,$id:14,$id:15,$id:16,$id:17,$id:18,$id:19,$id:20,$id:21,$id:22,$id:23] -a0
</code></pre></div></div>
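<p>Typing out 24 <code class="highlighter-rouge">$id:$slot</code> pairs by hand is error-prone, so here is a small sketch - hypothetical, not a MegaCLI feature - that generates the two <code class="highlighter-rouge">-Array</code> arguments from an enclosure ID and slot ranges:</p>

```shell
#!/bin/sh
# Sketch: build the -Array1/-Array2 arguments for -CfgSpanAdd from an
# enclosure ID and a slot range, instead of typing 24 pairs by hand.
# The enclosure ID (32) comes from the example -PDList output above.
id=32

build_array() {  # usage: build_array FIRST_SLOT LAST_SLOT
  out=""
  for slot in $(seq "$1" "$2"); do
    out="$out,$id:$slot"
  done
  printf '[%s]' "${out#,}"   # strip the leading comma
}

echo "-CfgSpanAdd -r10 -Array1$(build_array 0 11) -Array2$(build_array 12 23) -a0"
```

<p>Paste the printed arguments after <code class="highlighter-rouge">/opt/MegaRAID/MegaCli/MegaCli64</code>, double-checking the slot numbers against your own <code class="highlighter-rouge">-PDList</code> output first.</p>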
<ol>
<li>Once the RAID array is created, it should appear as a single disk with <code class="highlighter-rouge">lsblk</code>:</li>
</ol>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sdb 8:16 0 21.8T 0 disk
</code></pre></div></div>
<p>Now we follow the same steps as above to format/mount the disk.</p>
<ol>
<li>Format the array: <code class="highlighter-rouge">mkfs.xfs -f -d sunit=128,swidth=2048 -L data0 /dev/sdb</code></li>
<li>Create a data directory: <code class="highlighter-rouge">mkdir /data0</code></li>
<li>Add an entry to fstab: <code class="highlighter-rouge">echo "/dev/sdb /data0 xfs noatime" | sudo tee -a /etc/fstab</code></li>
<li>Mount the drive: <code class="highlighter-rouge">mount /data0</code></li>
</ol>
<h1 id="additional-settings">Additional Settings</h1>
<p>There were some other things we needed to set, like the cache policy, readahead, and the disk scheduler. Here’s how:</p>
<ul>
<li>Set write-through (<code class="highlighter-rouge">-L1</code> is the virtual drive ID from <code class="highlighter-rouge">-LDInfo</code>): <code class="highlighter-rouge">/opt/MegaRAID/MegaCli/MegaCli64 -LDSetProp WT -L1 -a0</code></li>
<li>Set direct, no cache: <code class="highlighter-rouge">/opt/MegaRAID/MegaCli/MegaCli64 -LDSetProp -Direct -L1 -a0</code></li>
<li>Turn readahead off: <code class="highlighter-rouge">/opt/MegaRAID/MegaCli/MegaCli64 -LDSetProp NORA -L1 -a0</code></li>
<li>Change the scheduler: <code class="highlighter-rouge">echo deadline > /sys/block/${disk_id}/queue/scheduler</code></li>
</ul>
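<p>Note that the scheduler change is a runtime sysfs write, so it does not survive a reboot - it has to be reapplied at boot (from rc.local, a udev rule, or similar) for every data disk. A dry-run sketch, with disk names assumed for illustration:</p>

```shell
#!/bin/sh
# Dry-run sketch: print the scheduler command for each data disk so the
# list can be replayed at boot (e.g. from rc.local or a udev rule).
# Disk names are assumed for illustration; the sysfs write needs root.
for disk in sdc sdd sde sdf sdg sdk sdl sdn; do
  echo "echo deadline > /sys/block/$disk/queue/scheduler"
done
```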
<h1 id="cleaning-up">Cleaning Up</h1>
<p>If you mess up and need to delete an array, or just want to convert between RAID levels, you can delete existing arrays by following these steps:</p>
<ol>
<li>List the existing virtual drives: <code class="highlighter-rouge">/opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -Lall -a0</code></li>
<li>Delete the virtual drive you don’t want: <code class="highlighter-rouge">/opt/MegaRAID/MegaCli/MegaCli64 -CfgLdDel -L$VIRTUAL_DRIVE_ID -a0</code></li>
<li>Set the drives back to “Good”: <code class="highlighter-rouge">sudo /opt/MegaRAID/MegaCli/MegaCli64 -PDMakeGood -PhysDrv[$EnclosureID:$SlotID] -Force -a0</code></li>
</ol>
<h1 id="conclusion">Conclusion</h1>
<p>Hopefully this helps if anyone is looking to run some MegaRAID commands. It took us a bit to figure it all out, but once we got some scripts together we can now format our servers with minimal issues.</p>