Skip to main content.

2016-09-19 Quick Jenkins install

Tue, 20 Sep 2016 09:59:35 -0400

Well, I must have over a hundred tips on sysadmin and development tasks written up over the past few years that I still need to get into this blog.

This is a quick example of installing and enabling Jenkins on Ubuntu. Jenkins is a continuous integration system. Basically, it provides a web interface that shows jobs and their results; the same interface is used to configure it, extend it, and control jobs automatically or manually. It is commonly used to build and test software on other systems and show the results in a single place. I have been using it for over four years. (Prior to that, I did research it, but wrote my own lightweight and portable testing framework that had, at one time or another, over 40 systems attached to it.)

While some Jenkins components showed up in my "apt-cache search jenkins" output, I didn't see a main or meta package, and all the docs I found said to install from upstream:

$ wget -q -O - | sudo apt-key add -

$ echo deb binary/ | sudo tee -a /etc/apt/sources.list.d/jenkins.list

$ sudo apt-get update

$ sudo apt-get install jenkins

The above also enabled and started the jenkins service. Jenkins is Java-based software; by default, its web server listens on port 8080, so visit it there. At first start, it asks for a password. Mine was at /var/lib/jenkins/secrets/initialAdminPassword . It was running Jenkins 2.7.4.

Then choose the suggested plugins or select your own. I chose my own and selected "SSH plugin". Others were already checked, so I left those suggestions as-is. Then it showed the "Getting started" progress bar with the plugins and their required dependencies being installed.

Next, create the first admin user and click the "save and finish" button. Then click the "start using jenkins" button when it says setup is complete.

Welcome to Jenkins.

The main interface has a "Please create new jobs to get started." link. Follow it and enter an item name. For a quick test, I entered "ps". Then choose a type; I chose Freestyle project and clicked OK. It then allows further details and options, like parameters, discarding old builds, using git, and build triggers such as periodic builds or SCM polling.

Under the "Build: Add build step" section, I selected executing a shell script on a remote host using ssh. For the command I entered "ps auxwww". Note that the SSH site dropdown is empty and you cannot type into it. So save, and then from the main Jenkins page go to Configure System (the /configure link). Under SSH remote hosts, click the Add button for SSH sites (the hosts that projects will connect to). Be sure to add the port number (22). I created a testing account for this (useradd -m ...; sudo passwd ...).

Back at my new project's configure page, it now shows the user@host:port entry for the SSH site. I clicked save. Note that the common Jenkins setup is to run a Java-based Jenkins agent on each remote system; the SSH way is a lightweight alternative.

I clicked Build now and it said job scheduled and a moment later I had a build history #1. Clicked it and then "Console output" and saw the ps output.

Jenkins has a lot of plugins and features for integrating with many build and test systems, properly showing results, and understanding output, such as identifying changes between runs.

2016-01-29 Removing some ubuntu processes

Fri, 29 Jan 2016 08:43:08 -0500

Long ago I wrote some articles and gave some lectures about simplifying Linux systems by running fewer processes. Today I looked at my Ubuntu laptop and saw it always has a load average around 0.33 and over 80 processes running. There were some obvious programs I didn't need and some I didn't know about. I used dpkg -S to find out which packages the processes came from. (Some didn't have man pages.)
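The inspection above can be reproduced with a few commands; "cron" here is just an illustrative process name, not one of the daemons I removed:

```shell
# show the current load average and the number of processes
cat /proc/loadavg
ps -e --no-headers | wc -l

# map a running process back to the package that installed it;
# substitute any daemon name for "cron"
pid=$(pgrep -o cron) && dpkg -S "$(readlink -f "/proc/$pid/exe")" || true
```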

I removed modemmanager, edited /etc/default/console-setup and moved /etc/init/tty[456].conf to reduce my virtual terminals, removed tor and a GeoIP database for Tor, removed colord (why do I need a system daemon to manage device colour profiles?), removed winbind and its PAM plugin (why do I care about Windows domain user/group lookup?), and removed ntp. I also added a crontab entry to run ntpdate twice a day instead. There is more to understand -- like why do I need a daemon to provide a DBUS interface for adding, modifying, and deleting user accounts (accountsservice package)?
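For reference, a crontab entry that runs ntpdate twice a day could look like this (the times and server are illustrative, not necessarily what I used):

```
# m h dom mon dow  command
0 6,18 * * *  /usr/sbin/ntpdate -s pool.ntp.org
```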

2016-01-16 BIND DNS book

Sat, 16 Jan 2016 17:06:11 -0500

This last month, we published another BIND DNS book: the second edition of our custom extended and improved reference manual. The first book was printed in September 2007. Since then we published around nine other books (including a printed DNSSEC specifications book) — and I started working full time in the DNS field (probably because the first edition helped get me in the door). For the first few years, we sold lots of copies, and the book was given to many BIND DNS students throughout the world. We were asked several times for an updated edition, but each time we got started, we ended up falling far behind as the technology (and book) changed. The first edition was done in LyX without any real revision control. The second edition was done in DocBook using Git.

Since the first edition, BIND added many new features, including:

  1. dlz search
  2. logging file versions
  3. DSCP support for traffic classification for quality of service
  4. managed-keys for automated updates of DNSSEC trust anchors
  5. rndc addzone, delzone (and allow-new-zones)
  6. rndc flushtree to selectively remove a name and its subdomains from cache
  7. auto-dnssec for automated signing
  8. rndc signing
  9. rndc scan and automatic-interface-scan
  10. bindkeys-file
  11. check-dup-records
  12. check-spf
  13. deny-answer-addresses and deny-answer-aliases for content filtering to prevent DNS rebinding attacks
  14. disable-ds-digests
  15. dns64 for synthesizing AAAA responses from IPv4 addresses
  16. dnssec-loadkeys-interval
  17. dnssec-secure-to-insecure
  18. dnssec-update-mode
  19. dnssec-validation auto
  20. filter-aaaa and filter-aaaa-on-v4 and filter-aaaa-on-v6
  21. GeoIP (and geoip-directory)
  22. inline-signing
  23. max-recursion-depth and max-recursion-queries
  24. max-rsa-exponent-size
  25. max-zone-ttl
  26. no-case-compress
  27. nosit-udp-size
  28. prefetch to requery for popular lookups to keep in cache
  29. rate-limit (with 15 options)
  30. request-nsid
  31. reserved-sockets
  32. resolver-query-timeout
  33. response-policy (with 9 policies)
  34. rndc secroots and secroots-file
  35. serial-update-method
  36. session-keyalg, session-keyfile, and session-keyname
  37. sig-signing-nodes, sig-signing-signatures, sig-signing-type, and sig-validity-interval
  38. tkey-gssapi-keytab
  39. use-v4-udp-ports and use-v6-udp-ports
  40. dnssec-dnskey-kskonly
  41. masterfile-format to keep zone files in raw or map format instead of text
  42. named changed behavior to remember the case of queried names, which can be turned off with no-case-compress
  43. dlz
  44. in-view to share master files
  45. static-stub zones
  46. redirect zones
  47. additional update-policy policies: local, tcp-self, 6to4-self, zonesub, and external
  48. server-addresses and server-names
  49. rndc sync
  50. rndc zonestatus
  51. delv tool
  52. dnssec-checkds tool
  53. dnssec-coverage tool
  54. dnssec-dsfromkey tool
  55. dnssec-importkey tool
  56. dnssec-keyfromlabel tool
  57. dnssec-revoke tool
  58. dnssec-settime tool
  59. dnssec-verify tool
  60. named-journalprint tool
  61. named-rrchecker tool
  62. ddns-confgen tool
  63. arpaname tool
  64. genrandom tool
  65. isc-hmac-fixup tool
  66. nsec3hash tool

In addition, some features were deprecated or changed:

Our book also covers several other bleeding edge features like:

  1. dyndb (dynamic database) for external data source
  2. buffered logging
  3. lwres-clients and lwres-tasks
  4. DNS cookies with cookie-algorithm, cookie-secret, nocookie-udp-size, require-server-cookie, send-cookie
  5. fetch-quota-params, fetches-per-server, fetches-per-zone
  6. limits on concurrently open files
  7. geoip-use-ecs
  8. keep-response-order
  9. masterfile-style
  10. notify-rate and startup-notify-rate
  11. rndc nta for Negative Trust Anchors to temporarily disable DNSSEC validation (with nta-lifetime and nta-recheck)
  12. nxdomain-redirect
  13. request-expire
  14. response-policy log
  15. serial-update-method date
  16. servfail-ttl to cache SERVFAIL responses
  17. v6-bias
  18. edns-version
  19. tcp-only
  20. rndc managed-keys
  21. rndc modzone and showzone

The BIND DNS Administration Reference book is the only printed book covering all these topics. Note that the most popular DNS book is ten years old, so it cannot cover the features above. Our book also includes installation instructions, examples of using vendor packages, lots of other original content, plus detailed indexing and additional cross-referencing.

Book details are at or order it from your favorite book store.

2015-11-07 Learning

Sat, 07 Nov 2015 14:01:01 -0500

My first real job after I finished my bachelor's degree was as a Unix admin for a local Internet service provider. While I had some professional background in Windows installations, web development, and Windows-related bug writeups (including in InfoWorld), I had no real commercial experience with Unix. My degree was in journalism and my previous degree was in physical education. For the previous two years, I had a login on a Sun system (from the university) and a Debian Linux system (from the computer club), where I learned basic HTML and CGI programming using Perl. I wrote a website management system for news and magazine sites, which I used for the university newspaper and sold to other news and magazine organizations.

I had installed Debian a few times and started reading Unix-related books. I experimented in a home lab of old junk computers implementing various services as if it was a commercial Internet enterprise.

Somehow I convinced the ISP (IWBC) to hire me even though I had no experience with the operating system they used. They did tell me that they wanted to hire someone with a bachelor's degree, so that was good. They offered dial-up service, DNS, email, and web hosting for homes and companies. They also published a website linking to pre-approved sites (like a "Yahoo"-style directory).

My role was to learn everything I could from the existing head admin, who was moving away in three months. The guy was a jerk --- this was before I learned about the Unix BOFH (the sysadmin who takes out his anger on users, colleagues, and more). I followed him around the office with a notebook and watched as he added user accounts on BSD/OS, added DNS zones in BIND, enabled and tested email users, tweaked Sendmail, added virtual hosts to Apache, etc. I took pages and pages of notes when we upgraded or installed BSDI. I wrote down commands, command-line switches, expected output, error messages, and more.

Soon he was gone, and I became the head admin. We only had around 500 domains (websites) but maybe around 10,000 dialup customers. Every day brought a variety of typical sysadmin tasks. But I had my notebook. I worked with the sales and phone reps so they could handle most tasks without me (like changing passwords).

I wrote scripts to automate my work, such as adding new email users, or adding new domains included in named.conf or hosts in httpd.conf.
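A zone-adding script of that sort might be sketched like this; the stanza is standard BIND syntax, but the paths, defaults, and file naming are illustrative assumptions, not my original script:

```shell
#!/bin/sh
# Append a master zone stanza for a new domain to a named.conf-style file.
# The defaults are for safe demonstration; in production, conf would be
# the real named.conf, followed by reloading named.
domain="${1:-example.org}"
conf="${2:-/tmp/named.conf.new}"

cat >> "$conf" <<EOF
zone "$domain" {
    type master;
    file "db.$domain";
};
EOF
echo "added zone $domain to $conf"
```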

I joined various email lists to discuss the software I used with experts in their communities. I asked questions. And I took notes.

As I improved my routines and automated the processes, I had more and more free time. My co-workers played online bingo and built servers for downloaded music.

I read man pages. I read more man pages. I checked out every Unix and related book from the local libraries and took careful notes.

I installed extra systems on surplus hardware to run development versions of the software we used or to test out alternative software. This led to the time I had a system's root compromised; thank you, BIND 8, as this taught me about server security and better administration practices. (This experience led me to years of working as a security consultant and auditor.)

Then I started answering more and more questions on the mailing lists. I tried to solve every problem posted. Before I read any follow-up emails, I tried to understand the issue, read the related docs, sometimes experimented with the software or configuration, and developed a solution or answer. I continued to take careful notes about the bugs and the solutions. For a while, I hosted a website with ISP Frequently Asked Questions.

Our ISP started to fail. The company spent all its money on expensive VOIP hardware, unused Oracle database licenses, and new tech staff who sat at their desks literally doing nothing --- no computers even assigned after months. Again, as my co-workers played, I studied. I'd go to the developers' offices and ask them to teach me to code. I shared my examples with them and read their code. I found open source software that we used for our servers and for my day-to-day sysadmin tasks and thought of ways to improve it.

One issue was that we didn't want hundreds or thousands of Unix user accounts just to offer email service. (That is, the users didn't need any Unix login or user privileges.) So I found a lightweight open source POP3 mail server and extended it to support virtual users. I contributed back over ten patches over time and soon was maintaining a fork of the software under a new name. It was the only option out there at the time, and ISPs around the world, some with tens of thousands of users, started using it, and I received lots of feedback and code contributions.

I started attending a monthly Linux users group about an hour from my house, and soon I was giving lectures and sharing my new knowledge. I sent proposals to local computer newspapers and magazines and wrote over 15 articles related to Unix and open source software.

I was able to get another job as a journalist and sysadmin to start a website for BSD news and tutorials. The company (, later Jupiter Media) sent me an $8000 server to install, and I was actually paid another $3500 a month to learn! This is where I started with my operating system of choice. (Later that job extended to cover Apache news and tutorials, PHP news and tutorials, and even Linux news.)

I researched Unix sysadmin roles and did job task analysis surveys privately and publicly to understand the work of web server operators. I installed lots of different solutions and alternative software to compare and contrast and to learn other points of view. As I experimented, if I found bugs or thought of enhancements or improvements, I would send my notes and even code fixes to the software developers. (These activities helped lead to my later work as a certification expert and a packaging expert.)

When the ISP failed, I was the last employee left --- the only person making sure the systems stayed running until they were powered down. (In hindsight I should have taken all the customers.) An accountant joined me at the empty office one day and told me they couldn't pay me -- for the third month in a row. He wrote it down on his clipboard as I took multiple desks, a rack, several servers, office chairs, a photocopier, and miscellaneous office supplies -- which I gave away for years and used for later learning and business.

Soon I began advertising my new expertise as a consultant and trainer ... but that is another story.

The point of this short period of learning is:

This short path in life was different from what I expected when I went to college, but it turned my entire life around. It changed my focus and gave me new direction. It jump-started a career in helping others via open source software.

2015-03-23 IETF

Sat, 07 Nov 2015 10:59:40 -0500

I am at the Internet Engineering Task Force meeting this week. The IETF is the primary organization that creates and publishes internet standards, such as HTTP for accessing web documents and SMTP for sending email. The IETF is an interesting group — it has no real membership, no voting, and all standards work is done voluntarily. All the published documents are freely available for individuals, companies, software developers, and hardware manufacturers to create solutions that will work with others. Computer scientists from throughout the world have travelled to Dallas this week to improve the internet's security and efficiency.

I am joining sessions about Scalable DNS Service Discovery, DNS PRIVate Exchange, Home Networking, Hypertext Transfer Protocol (HTTP), Domain Name System Operations, DNS-based Authentication of Named Entities (DANE), Sunsetting IPv4, Dynamic Host Configuration, and maybe more.

The volunteer-driven process behind the entire effort, which results in a better internet, amazes me. While governments and companies may mandate some technologies, it is interesting that these core technologies are voluntarily created and voluntarily used. Another book idea?