-
Nginx Ingress and Windows Server 2012 R2 TLS issue
Several months ago I configured Elastic APM on our Kubernetes (microk8s) cluster. It worked just fine for .NET 5 workloads running in Linux containers. Recently I needed to enable APM for another .NET 5 project running on Windows Server 2012 R2 and I faced the following error:
System.Net.Http.HttpRequestException: The SSL connection could not be established, see inner exception. ---> System.Security.Authentication.AuthenticationException: Authentication failed because the remote party sent a TLS alert: 'HandshakeFailure'. ---> System.ComponentModel.Win32Exception (0x80090326): The message received was unexpected or badly formatted.
Thanks to Qualys SSL Labs I was able to quickly find out that our nginx-ingress accepts TLS 1.2 and TLS 1.3 only, with a secure set of TLS ciphers. Unfortunately Windows Server 2012 R2 does not support those secure TLS ciphers, so I had to enable the TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 (0xc028) (weak) cipher as a workaround for now.
- To do so I first examined the arguments our ingress pods are using (see below) and found that I should name the ConfigMap nginx-load-balancer-microk8s-conf (and put it in the same namespace the ingress pods are using).
/nginx-ingress-controller --configmap=$(POD_NAMESPACE)/nginx-load-balancer-microk8s-conf --tcp-services-configmap=$(POD_NAMESPACE)/nginx-ingress-tcp-microk8s-conf --udp-services-configmap=$(POD_NAMESPACE)/nginx-ingress-udp-microk8s-conf --publish-status-address=127.0.0.1
- Then I created the ConfigMap I needed (see below) and applied it by running
kubectl apply -f config-map.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-load-balancer-microk8s-conf
  namespace: ingress
data:
  ssl-ciphers: "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384"
  ssl-protocols: "TLSv1.2 TLSv1.3"
- And the final step was to restart the DaemonSet:
kubectl rollout restart daemonset nginx-ingress-microk8s-controller -n ingress
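To double-check from a client machine that the weak cipher is now accepted, a handshake can be forced with it using OpenSSL (the host name below is a placeholder for your ingress endpoint):
openssl s_client -connect apm.example.com:443 -tls1_2 -cipher ECDHE-RSA-AES256-SHA384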
-
SlickRun on Ubuntu
I've been using Ubuntu 20.04 as my main operating system for some time now, so I'd like to document some tips in my blog. I've been using SlickRun on Windows for years. It's just great: I can launch any program or website. For example, to open work item #123 I just type
wi 123
and it builds the correct URL for me and opens that work item in my browser. A more generic example might be the google abc command, which searches for abc on the web. It does so by appending abc to the URL, so it's something like https://www.google.com/search?q=abc. I needed a similar workflow for Ubuntu and here's my solution.
Configure gRun as SlickRun
- I installed gRun, a program that allows launching programs and scripts, using the apt install grun command.
- I configured the Alt+Q hotkey for gRun under Settings -> Keyboard Shortcuts (by default SlickRun uses that one).
- Then I created a .grun folder in my home directory using the mkdir ~/.grun command.
- In that folder I created the ~/.grun/grun-enable script that allows configuring new URLs:
echo "xdg-open $1" > ~/.grun/$2
chmod +x ~/.grun/$2
- Now we need to add the .grun folder to the PATH, so that we can launch commands without specifying the ~/.grun prefix. To do so, add the text below to the very end of ~/.profile and re-login.
if [ -d "$HOME/.grun/" ] ; then
    PATH="$HOME/.grun/:$PATH"
fi
- That's it! Now you can create the google command from the example above by running
grun-enable https://www.google.com/search?q=\$1 google
and use it by typing google abc in gRun or in a terminal.
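The wi 123 workflow from the beginning of the post can be recreated the same way - the work item URL below is just a made-up example, substitute the one your tracker uses:
grun-enable https://tfs.example.com/tfs/FooBar/_workitems/edit/\$1 wi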
VPN
Would you like to connect to a VPN via gRun or the terminal? We've got you covered. Just put nmcli con up id CONNECTION_NAME_HERE into ~/.grun/vpn and then mark it as executable via chmod +x ~/.grun/vpn. Or you can do that with a one-liner:
echo "nmcli con up id CONNECTION_NAME_HERE" > ~/.grun/vpn && chmod +x ~/.grun/vpn
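If you also want a quick way to disconnect, the same pattern works with nmcli con down (the connection name is a placeholder, as above):
echo "nmcli con down id CONNECTION_NAME_HERE" > ~/.grun/vpn-down && chmod +x ~/.grun/vpn-down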
What’s next?
So we've got an easy way to run virtually anything with Alt+Q (or any other hotkey you're used to). For me that makes a real difference; now I feel at home :) Hope you will like it too, Dear Reader. …
-
Tracking Application Response Time with NGINX, Filebeat and Elastic Search
Recently we needed to enable response time monitoring on an NGINX server. Let me try to summarise the steps needed to bring response times from NGINX into Elastic Search.
NGINX Configuration
In order to do so we had to define a new log format. That topic was covered in much detail at lincolnloop.com back on Nov 09, 2010! In short, you need to add a log format into nginx.conf:
log_format timed_combined '$remote_addr - $remote_user [$time_local] '
    '"$request" $status $body_bytes_sent '
    '"$http_referer" "$http_user_agent" '
    '$request_time $upstream_response_time $pipe';
The next step is to modify the access_log directives to use the new format:
access_log /var/log/nginx/yourdomain.com.access.log timed_combined;
Once the configuration files have been updated, run
nginx -t
to test them. If NGINX likes your new configuration, run
nginx -s reload
so it will start using them.
Filebeat Configuration
Filebeat is a lightweight shipper for logs. We are using it to deliver logs to our Elastic Search cluster. To review logs and metrics we are using Kibana. Filebeat uses grok patterns to parse log files. Basically, all you need to do is update the grok pattern Filebeat uses to parse NGINX logs. In my case it's located at /usr/share/filebeat/module/nginx/access/ingest/pipeline.yml. I appended the following to the end of the pattern in the patterns: definition:
%{NUMBER:http.request.time:double} (-|%{NUMBER:http.request.upstream.time:double}) %{DATA:http.request.pipelined}
Which is what I've got after that:
...
patterns:
  - (%{NGINX_HOST} )?"?(?:%{NGINX_ADDRESS_LIST:nginx.access.remote_ip_list}|%{NOTSPACE:source.address}) - (-|%{DATA:user.name}) \[%{HTTPDATE:nginx.access.time}\] "%{DATA:nginx.access.info}" %{NUMBER:http.response.status_code:long} %{NUMBER:http.response.body.bytes:long} "(-|%{DATA:http.request.referrer})" "(-|%{DATA:user_agent.original})" %{NUMBER:http.request.time:double} (-|%{NUMBER:http.request.upstream.time:double}) %{DATA:http.request.pipelined}
...
- The http.request.time variable represents the full request time, starting when NGINX reads the first byte from the client and ending when NGINX sends the last byte of the response body.
- The http.request.upstream.time variable represents the time between establishing a connection to an upstream server and receiving the last byte of the response body.
- The http.request.pipelined variable contains "p" if the request was pipelined, "." otherwise.
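For reference, a request logged in the timed_combined format looks roughly like this (all values below are made up for illustration):
203.0.113.10 - - [10/Oct/2021:13:55:36 +0000] "GET /api/items HTTP/1.1" 200 512 "-" "Mozilla/5.0" 0.087 0.085 .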
Filebeat pipeline update
Please note that once you have updated the pipeline.yml file, you will need to make Filebeat push it to Elastic Search. You have several options here:
- You can run the filebeat setup command, which will make sure everything is up-to-date in Elastic Search (see the sketch after this list).
- You can remove the ingest pipeline manually from Elastic Search by running the DELETE _ingest/pipeline/filebeat-*-nginx* request (for example, in the Kibana Dev Tools console). Then start Filebeat - it will set everything up during its start-up procedure.
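For the first option, a narrower variant that only pushes the ingest pipelines of the nginx module is also available (assuming the module is enabled):
filebeat setup --pipelines --modules nginx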
Backward compatibility
If you happen to have old log files you'd like to be able to process, then you should specify two patterns:
- One with performance metrics to match the timed_combined format.
- Another without performance metrics to match the default format.
...
patterns:
  - (%{NGINX_HOST} )?"?(?:%{NGINX_ADDRESS_LIST:nginx.access.remote_ip_list}|%{NOTSPACE:source.address}) - (-|%{DATA:user.name}) \[%{HTTPDATE:nginx.access.time}\] "%{DATA:nginx.access.info}" %{NUMBER:http.response.status_code:long} %{NUMBER:http.response.body.bytes:long} "(-|%{DATA:http.request.referrer})" "(-|%{DATA:user_agent.original})" %{NUMBER:http.request.time:double} (-|%{NUMBER:http.request.upstream.time:double}) %{DATA:http.request.pipelined}
  - (%{NGINX_HOST} )?"?(?:%{NGINX_ADDRESS_LIST:nginx.access.remote_ip_list}|%{NOTSPACE:source.address}) - (-|%{DATA:user.name}) \[%{HTTPDATE:nginx.access.time}\] "%{DATA:nginx.access.info}" %{NUMBER:http.response.status_code:long} %{NUMBER:http.response.body.bytes:long} "(-|%{DATA:http.request.referrer})" "(-|%{DATA:user_agent.original})"
...
-
Deployment Group provision in Azure Dev Ops (On Premise)
We are long-time users of Team Foundation Server (TFS). As you may know, it has recently been renamed to Azure Dev Ops. I absolutely love the new "Dev Ops" version (we are running v. 17.M153.5 by the way). But we faced two issues with it, so I'd like to document these here.
1. Build Agent registration
If you need to register a Build Agent, you have to include the Project Collection Name in the URL. For example, previously it worked fine if you specified https://tfs.example.com/tfs/. But with Azure Dev Ops you have to include https://tfs.example.com/tfs/FooBar/ (FooBar is the collection name here). Otherwise you will get a "Client authentication required" error.
2. Deployment Agent registration
If you need to register an agent into a Deployment Group, you need to modify the PowerShell script a bit. In particular you have to add --unattended --token {PAT_TOKEN_HERE}
So instead of the command below, which is part of the Registration script on the Dev Ops "Deployment Group" screen:
.\config.cmd --deploymentpool --deploymentpoolname "DEV" --agent $env:COMPUTERNAME --runasservice --work '_work' --url 'https://tfs.example.com/tfs/'
it should be something like this:
.\config.cmd --deploymentpool --deploymentpoolname "DEV" --agent $env:COMPUTERNAME --runasservice --work '_work' --url 'https://tfs.example.com/tfs/' --unattended --token {PAT_TOKEN_HERE}
Otherwise you will be asked to provide the URL to DevOps again, and then get a "Not Found" error if you try to include the Collection Name in the URL. As I understand it, the second issue is related to the same root cause as the first one - without the --unattended flag it was complaining about the https://tfs.example.com/tfs/ URL. When I included the Collection Name in the URL, it showed a "Not Found" error because the collection name appeared twice:
https://tfs.example.com/tfs/{COLLECTION_NAME}/{COLLECTION_NAME}/_apis/connectionData?connectOptions=1&lastChangeId=-1&lastChangeId64=-1 failed. HTTP Status: NotFound
A similar issue is discussed at https://github.com/microsoft/azure-pipelines-agent/issues/2565#issuecomment-555448786 …
-
OpenSSL saves the day
We needed to issue a tiny patch release for one of our legacy applications. To do so we had to order a new code-signing certificate. I was a bit surprised when the build failed with an "Invalid provider type specified" error. For some reason it was failing to sign the ClickOnce manifest. What's interesting is that signtool.exe was able to use that certificate just fine… I was lucky enough to find an amazing blog post at https://remyblok.tweakblogs.net/blog/11803/converting-certificate-to-use-csp-storage-provider-in-stead-of-cng-storage-provider I faced an issue though… I was not able to find pvk.exe because Dr. Stephen N Henson's website (at http://www.drh-consultancy.demon.co.uk/pvk.html) was down and I found no mirrors out there… So I used a slightly different approach to tackle it:
- I used OpenSSL to generate a PVK file out of the PEM using the command below
openssl rsa -inform PEM -outform PVK -in demo.pem -out demo.pvk -passin pass:secret -passout pass:secret
- Then I used OpenSSL to generate a PFX out of the PVK & CER files (I first had to export the public key as Base-64 encoded X.509 (.CER) for the command below to work properly)
openssl pkcs12 -export -out converted.pfx -inkey demo.pvk -in demo.cer -passin pass:secret -passout pass:secret
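Before handing the PFX to the build, it can be sanity-checked with OpenSSL as well (file name and password as in the commands above):
openssl pkcs12 -in converted.pfx -info -noout -passin pass:secret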
-
Let's Encrypt or HTTPS for everyone
It's been a year since we started using free certificates on some of our production servers, so I decided to put together a tiny article highlighting how easy it is to secure connections to your server using Let's Encrypt:
Let’s Encrypt
To enable HTTPS on your website, you need to get a certificate (a type of file) from a Certificate Authority (CA). Let’s Encrypt is a CA. In order to get a certificate for your website’s domain from Let’s Encrypt, you have to demonstrate control over the domain. With Let’s Encrypt, you do this using software that uses the ACME protocol, which typically runs on your web host.
More details at https://letsencrypt.org/getting-started/
ACME Client for Windows - win-acme
To enable HTTPS on an IIS website, all you have to do is the 3 steps below:
- Find out the Site ID in IIS (open IIS Manager and click on the "Sites" folder, or list the sites from the command line as shown below)
- Download a Simple ACME Client for Windows
- Run the ACME Client (letsencrypt.exe), passing the Site ID and an email address for notifications
letsencrypt.exe --plugin iissite --siteid 1 --emailaddress john.doe@example.com --accepttos --usedefaulttaskuser
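If you prefer the command line over IIS Manager, the Site IDs can also be listed with appcmd (the path assumes a default IIS installation):
C:\Windows\System32\inetsrv\appcmd.exe list sites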
…
-
Group Policies which could affect your Web Application
We are working on a web application which heavily depends on the following browser features:
- Application Cache - it allows websites to ask the browser to cache them, so that users are able to open these websites offline.
- Indexed DB - it allows websites to store data in the browser cache, so that all needed data will be available offline.
- Web Storage - it allows websites to store settings in the browser cache.
Group Policy
It's common for enterprises to adjust default IE 11 settings using Group Policies. In such cases some of the functionality will not be available. For example, a website may fail to work offline if it's unable to store data in the browser's cache. We prepared a list of the settings which might have an impact on websites utilizing the above browser features.
Edge
- Computer Configuration -> Administrative Template -> Windows Components -> Microsoft Edge
- Allow clearing browsing data on exit. Not Configured by default. If Enabled, it could cause data loss and users won't be able to open the application offline.
IE 11
- Computer Configuration -> Administrative Template -> Windows Components -> Internet Explorer -> Internet Control Panel -> Advanced Page
- Empty Temporary Internet Files folder when browser is closed. Disabled by default.
- Computer Configuration -> Administrative Template -> Windows Components -> Internet Explorer -> Internet Control Panel -> General Page -> Browsing History
- Allow websites to store application caches on client computers. Enabled by default.
- Set application caches expiration time limit for individual domains. The default is 30 days.
- Set maximum application cache resource list size. The default value is 1000.
- Set maximum application cache individual resource size. The default value is 50 MB.
- Set application cache storage limits for individual domains. The default is 50 MB.
- Set maximum application caches storage limit for all domains. The default is 1 GB.
- Set default storage limits for websites. Not Configured by default.
- Allow websites to store indexed databases on client computers. Enabled by default. Required for the application to be available offline.
- Set indexed database storage limits for individual domains. The default is 500 MB.
- Set maximum indexed database storage limit for all domains. The default is 4 GB.
-
git-crypt - transparent file encryption in git
Here at Compellotech we have been using Octopus to automate all of our deployments for several years now. Recently we started to adopt the Infrastructure as Code (IaC) approach to simplify environment management. It allows us to spin up new environments right from the Octopus dashboard. We are using Azure Key Vault to store secret data (such as SSL certificates). And I just came across an interesting alternative - git-crypt. It looks very convenient.
git-crypt enables transparent encryption and decryption of files in a git repository. Files which you choose to protect are encrypted when committed, and decrypted when checked out.
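A minimal sketch of how a repository might be set up with it - the secrets/** pattern and the GPG key ID below are placeholders:
# initialize git-crypt in the repository (generates a symmetric key)
git-crypt init
# tell git which files should be encrypted
echo "secrets/** filter=git-crypt diff=git-crypt" >> .gitattributes
# grant a teammate access by their GPG key ID
git-crypt add-gpg-user ABCD1234
# on a fresh clone, decrypt the protected files
git-crypt unlock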
…
-
SQL Server Managed Backup to Microsoft Azure
Recently we migrated one of our projects to SQL Server 2016. As part of the migration we enabled TDE for some databases. The next step was to configure backups. On our old SQL Server 2008 we already used to back up to Azure. It's very convenient! So we were happy to use the Managed Backup feature of SQL Server 2016. There is a really good step-by-step tutorial on MSDN on how to set it up. I just want to note that when you configure "instance level" backups, keep in mind that you will have to apply the same settings to existing databases manually. So it makes sense to first configure "instance level" backup settings and then restore your databases. It might save you a bit of time. It was a breeze to configure Managed Backup… very smooth experience. Highly recommend! …
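For reference, applying the settings to an individual existing database boils down to a call like the one below - a rough sketch only; the database name, container URL and retention period are placeholders, and the Azure credential must already be set up as described in the MSDN tutorial:
sqlcmd -S . -E -Q "EXEC msdb.managed_backup.sp_backup_config_basic @database_name = 'YourDb', @enable_backup = 1, @container_url = 'https://yourstorage.blob.core.windows.net/backups', @retention_days = 30;"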
-
Rethink DB - The open-source database for the real-time web
A couple of months ago I came across Rethink DB - the open-source database for the real-time web. I'm really interested in real-time web tools and technologies. Last year I played with Meteor. And I still think it's a pretty nice framework, especially for simple projects. What I don't like about Meteor is that you have to opt in to all the decisions they made. For example, you have to use MongoDB (at least at the moment) and you can't use npm packages (at least at the moment). As far as I know, the Meteor team is working to address these issues. As for Firebase, it's great, but again you have to opt in, and there is a possibility that you'll have to switch away from it at some point if your project no longer fits well. I'm looking for a stack which allows rapid development of real-time apps and at the same time keeps all options open, so I can easily make whatever decision fits best for the given project. That's why Rethink DB looks so interesting. First of all, it's a powerful, easy-to-use and easy-to-configure document database. You can configure sharding and replication in a few clicks. You can create a cluster very easily, again using the fancy web UI!
Moreover, Rethink DB allows you to subscribe to change notifications. For example, a NodeJS application can subscribe to changes in the messages table in just a few lines and then push those changes to clients using socket.io. Another use case is to send data into Elastic Search to enable full-text search. The great thing is that all aspects are under your control. You decide what exactly to send to Elastic Search, so instead of sending the whole document you send just the fields you want to be searchable. In the same way, you decide what to send to clients and you can easily customize that at any point. If you'd like to learn more about Rethink DB, there is a great RethinkDB Fundamentals course at Pluralsight. The RethinkDB team recently released Horizon - a realtime, open-source backend for JavaScript apps. As you can expect, it uses RethinkDB as its central component. …