
BSD Magazine - November 2017


IS AFFORDABLE

FLASH STORAGE
OUT OF REACH?
NOT ANYMORE!

IXSYSTEMS DELIVERS A FLASH ARRAY


FOR UNDER $10,000.

Introducing FreeNAS® Certified Flash: A high performance all-flash array at the cost of spinning disk.

• Unifies NAS, SAN, and object storage to support multiple workloads
• Runs FreeNAS, the world’s #1 software-defined storage solution
• Performance-oriented design provides maximum throughput/IOPs and lowest latency
• OpenZFS ensures data integrity
• Perfectly suited for Virtualization, Databases, Analytics, HPC, and M&E
• 10TB of all-flash storage for less than $10,000
• Maximizes ROI via high-density SSD technology and inline data reduction
• Scales to 100TB in a 2U form factor

The all-flash datacenter is now within reach. Deploy a FreeNAS Certified Flash array
today from iXsystems and take advantage of all the benefits flash delivers.

Call or click today! 1-855-GREP-4-IX (US) | 1-408-943-4100 (Non-US) | www.iXsystems.com/FreeNAS-certified-servers

Copyright © 2017 iXsystems. FreeNAS is a registered trademark of iXsystems, Inc. All rights reserved.

DON’T DEPEND
ON CONSUMER-
GRADE STORAGE.
KEEP YOUR DATA SAFE!

USE AN ENTERPRISE-GRADE STORAGE


SYSTEM FROM IXSYSTEMS INSTEAD.

The FreeNAS Mini: Plug it in and boot it up — it just works.

• Runs FreeNAS, the world’s #1 software-defined storage solution
• Unifies NAS, SAN, and object storage to support multiple workloads
• Encrypt data at rest or in flight using an 8-Core 2.4GHz Intel® Atom® processor
• OpenZFS ensures data integrity
• Backed by a 1 year parts and labor warranty, and supported by the Silicon Valley team that designed and built it
• Perfectly suited for SoHo/SMB workloads like backups, replication, and file sharing
• Lowers storage TCO through its use of enterprise-class hardware, ECC RAM, optional flash, white-glove support, and enterprise hard drives
• A 4-bay or 8-bay desktop storage array that scales to 48TB and packs a wallop

And really — why would you trust storage from anyone else?

Call or click today! 1-855-GREP-4-IX (US) | 1-408-943-4100 (Non-US) | www.iXsystems.com/Freenas-Mini or purchase on Amazon.

Intel, the Intel logo, Intel Inside, Intel Inside logo, Intel Atom, and Intel Atom Inside are trademarks of Intel Corporation or its subsidiaries in the U.S. and/or other countries.

EDITOR’S WORD

Dear Readers,

I hope you are well. The editor’s word will sound a little different since we are close to ushering in the final edition of the year. It’s a month in which we reflect on our lives and our goals, and, most importantly, make plans for the following year. I hope you enjoyed all the monthly issues of 2017. Suffice it to say, there will be one more issue at the end of December to crown this wonderful year. We really appreciate your readership and engagement, and we hope to provide you with even more meaningful content in 2018.

I hope you have started on your New Year’s resolutions and your Christmas present lists. I believe it’s that time of the year when we should have some fun and spread some joy by surprising others. I really like Christmas time and the mood it sets towards the end of the year. I hope that you will have time to prepare for, and eventually enjoy, the good moments of this festive season before we reach the close of the year.

I also hope that you have prepared a list of what you would like to learn in 2018. Which interesting tools and technologies have you discovered lately in your free time? Please share them with me so that I can surprise you with an online course on one of them in 2018. With BSD Magazine, you can be sure to learn about your favourite tools and technologies; this is one of our obligations to our readers. Hence, I look forward to your emails.

Now, let’s take a peek into this issue. You will find many great and technically interesting articles. I really like all of them, and I am thankful to the authors for their patience while we were preparing them. I invite you to look over the list of articles on the next page. Lastly, a big thank you to all our reviewers for their valuable suggestions on how to make the articles better.

So, just sit back, get a light drink, and engage with the authors’ minds.

Enjoy reading,

Ewa & The BSD Team



ewa@bsdmag.org

Note! Remember to read the TOOL REVIEW on page 6 and learn about Web Development Forensics with BugReplay by David Carlier.


TABLE OF CONTENTS

FREEBSD

OpenLDAP Directory Services in FreeBSD (I). Dynamic Configuration Fundamentals   08
José B. Alós
The main objective of this article is to introduce directory services managed under the LDAP protocol, and to illustrate a new configuration approach known as OpenLDAP Online Configuration (OLC), which was introduced in OpenLDAP v2.3.

Fluentd For Centralized Logging II: Fluentd-UI and Suricata IDS   22
Andrey Ferriyan
Andrey will explain in detail how to collect our logs on the server, then forward and process these logs so we can have better and more meaningful information. He is collecting logs from Suricata (https://suricata-ids.org), which is a signature-based Intrusion Detection System (IDS) like Snort.

FreeBSD, Google Cloud, and Dual ECC/RSA Let's Encrypt Certificates   26
Bob Cromwell
Bob Cromwell deployed a website to FreeBSD on the Google Cloud Platform. He set up HTTPS with free Let's Encrypt (letsencrypt.org) TLS certificates for both RSA and ECC, and set up automatic renewal of the dual certificates. None of this is difficult, but he discovered that some steps aren't openly supported or well documented. Specifically, running FreeBSD on an IaaS or Infrastructure as a Service cloud environment, and automatically renewing dual RSA/ECC Let's Encrypt certificates.

Mongoose Embedded Web Server on FreeBSD   38
Abdorrahman Homaei
The Internet of Things (IoT) is getting more popular, so maybe FreeBSD and Mongoose would be a wise choice. With FreeBSD and Mongoose, you can run a full-fledged, fast and minimal web server. Additionally, you can run Mongoose on non-embedded devices. For example, “corebox.ir” is based on the Mongoose web server.

DATABASE

Using PostgreSQL Foreign Data Wrapper to Keep Track of Files   42
Luca Ferrari
This paper proposes a simple setup of a File System FDW that allows a system administrator or an application to query the filesystem to get information about files, as well as storing at least one historical version of the latter.

ADMIN

Free RDP Configuration   48
Loris Zimmerman
In this article, you will learn how to set up an HP t620 Thin Client with the Linux kernel.

INTERVIEW

Interview with Abdorrahman Homaei   52
Ewa & The BSD Team
Currently, Abdorrahman is busy with daily administration tasks and CoreBOX development, which are getting harder and more intense.

Interview with Oleksandr Tymoshenko   54
Ewa & The BSD Team
Oleksandr is a software developer with more than 15 years of experience. He worked on a number of projects in various fields including Linux PDA software, an SMS center for a GSM telco, servers for multiplayer games, an IP PBX box, and firmware for VoIP phones.

COLUMN

On October 1st, the Network Enforcement Act took effect in Germany. This creates a legal framework for censorship of the Internet. As more and more governments take the hammer of censorship to content, what are the ramifications for free speech? More importantly, has the Internet come of age?   58
Rob Somerville
TOOL REVIEW

Web Development Forensics with


BugReplay
reviewed by David Carlier

Every web developer has his set of tools for debugging a web application. The most common developer tools are Firebug for Firefox, Chrome’s Developer Tools, and Internet Explorer’s Developer Toolbar. All of these can be used to inspect HTML elements, Javascript, CSS styles, network traffic, and Ajax calls. These tools are great, but they have one downside: all of this data is produced in real time but not recorded. But what if you want to capture a specific bug scenario to study it and replay it at will, or to show it to the rest of the development team, your client, and so on? This can be especially useful if the bug is triggered only in a very specific case, such as a web form filled with invalid data, an Ajax call which takes longer than it should, or a bug in client-side Javascript. The list of possible use cases for this type of tool goes on. To use BugReplay, you’ll need the Chrome/Chromium/Iridium browser; you’ll also need to download the extension and register an account on http://www.bugreplay.com.

The BugReplay extension works basically in two modes. Snapshot mode takes a static picture of the current page, to be able to spot, for example, misplaced HTML elements. Video mode records for a couple of seconds, to be able to reproduce a specific scenario as explained above. Both modes are reached through two explicit icons.

As you can see in the last illustration, there is a blue icon that allows us to report or give immediate feedback to the BugReplay team within reasonable working hours. Once the snapshot is taken, you can apply various modifications such as cropping, resizing, adding a comment, and drawing on top of it to highlight the problem, which is useful even for non-developers.

Then, there is the video mode, which works just as well as the snapshot mode. Network and Javascript traffic are also recorded (seemingly not available in the trial), and the video is available relatively soon after post-processing. Nevertheless, this is a set of tools which can have its place, as the trial makes clear.

One warning: the data that is collected is stored in BugReplay's cloud. Because of this, you will need to decide whether this is appropriate for your particular application. Apart from this, it gets the job done as promised.

About BugReplay

BugReplay, a provider of an innovative set of web browser tools that make reporting bugs faster and fixing them easier, today announced the availability of its flagship product of the same name as an add-on for the Firefox web browser. A screencast and network debugging tool for web developers and internal software testers, BugReplay enables users to quickly and accurately submit detailed bug reports about web applications. By creating a synchronized screen recording of a user’s actions, network traffic, JavaScript logs and other key environmental data, BugReplay reduces the time to complete the task of bug reporting from up to an hour or more to less than a minute.

Founded in 2015, BugReplay is a leading provider of an innovative set of web browser tools that make reporting bugs faster and fixing them easier. Its mission is to develop easy-to-use tools for diagnosing and repairing issues with web applications. Based in New York City, BugReplay’s offerings include: BugReplay, a screencast and network debugging tool for web developers and internal software testers; and Feedback by BugReplay, a reporting tool for website users to submit bug reports to customer support teams. For more information, visit http://www.bugreplay.com and follow on Twitter @BugReplay.
FREEBSD

OpenLDAP Directory Services in FreeBSD (I).


Dynamic Configuration Fundamentals

What you will learn:


• Installation and configuration methods for OpenLDAP 2.4 under FreeBSD
• Basic foundations of the new LDAP on-line configuration (OLC)
• Hardening LDAPv3 with SASL and TLS/SSL protocols
• Embedding of NIS+/YP into an LDAP server to provide centralized NIS+ support for UNIX computers
• Administration and basic tuning principles for LDAP servers

What you should already know:


• Intermediate UNIX OS background as end-user and administrator
• Some knowledge of UNIX authentication systems and NIS+/YP
• Experience with FreeBSD system package and FreeBSD ports
• Good taste for command-line usage

The main objective of this article is to introduce directory services managed under the LDAP protocol, and to illustrate a new configuration approach known as OpenLDAP Online Configuration (OLC), which was introduced in OpenLDAP v2.3. We will also present a direct application: encapsulating a NIS+/YP centralized user authentication and management schema for an arbitrary number of servers and clients connected to a TCP/IP network. Additionally, we’ll show a web-based administration tool that will make administering the OpenLDAP server easier.

An Overview of Directory Services

Directory services are a special type of database storage system focused on heterogeneous, hierarchical data. In comparison with traditional full-service standalone relational database management systems, the number of LDAP read operations exceeds the number of write operations. The read operations are mainly searches for special data which follow the patterns described by RFC 1558. For this reason, the standard RDBMS approach is abandoned in favor of key-value databases used as a backend.

LDAPv3 introduces some key improvements over its predecessor LDAPv2, such as:

• UTF-8 internationalization support for foreign languages
• Enhanced security mechanisms such as SASL for authentication
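To make the hierarchical data model described above a little more concrete, here is a minimal, hypothetical LDIF fragment (the names are purely illustrative and are not reused in the examples later in this article): an organizational unit and a user entry nested under a common suffix.

dn: ou=People,dc=example,dc=org
objectClass: organizationalUnit
ou: People

dn: uid=jdoe,ou=People,dc=example,dc=org
objectClass: inetOrgPerson
cn: John Doe
sn: Doe
uid: jdoe

Each entry’s Distinguished Name (DN) encodes its position in the tree, which is exactly the hierarchical shape that LDAP searches exploit.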
In LDAP, authentication is required after the initial "bind" operation. LDAPv3 supports three types of authentication: anonymous, simple, and SASL authentication. A client that sends an LDAP request without doing a "bind" is treated as an anonymous client.

Simple authentication consists of sending the LDAP server the fully qualified Distinguished Name (DN) and a clear-text password of the client. The security weakness of this mechanism is that the password can be read by anyone who can access the network. You can use the simple authentication mechanism within an encrypted channel (such as SSL) to avoid exposing the password in this manner, provided that it is supported by the LDAP server.

Finally, SASL is the Simple Authentication and Security Layer described by RFC 2222. It specifies a challenge-response protocol in which data is exchanged between the client and the server for authentication and the establishment of a security layer on which to carry out subsequent communication. By using SASL, LDAP can support any authentication method agreed upon in a secured negotiation between the LDAP client and server.
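As a minimal, hypothetical illustration of the difference between these bind types (the host name and admin DN below are placeholders, not values used later in this article), the ldapwhoami(1) client can perform either kind of bind:

# Simple bind: the DN and a clear-text password are sent to the server,
# so this should only be used over TLS or on a trusted network.
$ ldapwhoami -x -H ldap://ldap.example.org -D "cn=admin,dc=example,dc=org" -W

# SASL EXTERNAL bind over the local ldapi:// socket: the client is
# authenticated by its UNIX credentials, and no password crosses the wire.
$ ldapwhoami -Y EXTERNAL -H ldapi:///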
Getting Started with OpenLDAP

Installation Procedure

One obvious and important requirement is an updated and running installation of the FreeBSD OS, which at the time of writing was FreeBSD 11.1.

It is possible to take advantage of virtualization technologies to simplify the installation and testing process, so long as the machine has a functional internet connection to perform package downloads, or a reachable local repository mirror.

First, let us ensure we are up to date with the current patches and fixes in BASE by running:

root@laertes:~ # /usr/sbin/freebsd-update fetch
root@laertes:~ # /usr/sbin/freebsd-update install

Next, ensure that our package database is up to date and pkgng-ready by running:

root@laertes:~ # pkg upgrade
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
Checking for upgrades (1 candidates): 100%
Processing candidates (1 candidates): 100%
Checking integrity... done (0 conflicting)
Your packages are up to date.

Following the update of pkg(1), we will need to install the ports tree in order to build OpenLDAP+SASL, as the official pre-built binaries for OpenLDAP DO NOT include SASL support. If you do not require SASL, you can install using the standard `pkg install` routine instead of the ports. To install the ports tree:

root@laertes:~ # portsnap fetch extract
Looking up portsnap.FreeBSD.org mirrors... 6 mirrors found.
Fetching public key from ec2-eu-west-1.portsnap.freebsd.org... done.
Fetching snapshot tag from ec2-eu-west-1.portsnap.freebsd.org... done.
Fetching snapshot metadata... done.
Fetching snapshot generated at Tue Oct 10 02:00:56 CEST 2017:
...
/usr/ports/x11/yeahconsole/
/usr/ports/x11/yelp/
/usr/ports/x11/zenity/
Building new INDEX files... done.

The download and extraction of the portsnap tarball takes a while; a good time for coffee or your preferred beverage.

Once portsnap has finished its work, install the portmaster utility as a pre-built pkg and continue with building our packages:

root@laertes:~ # pkg install portmaster
Updating FreeBSD repository catalogue...
Fetching meta.txz: 100%    940 B   0.9kB/s    00:01
Fetching packagesite.txz: 100%    6 MiB 175.1kB/s    00:35
Processing entries: 100%
FreeBSD repository update completed. 26972 packages processed.
All repositories are up to date.
The following 1 package(s) will be affected (of 0 checked):

New packages to be INSTALLED:
  portmaster: 3.17.10

Number of packages to be installed: 1
42 KiB to be downloaded.
Proceed with this action? [y/N]: y
[1/1] Fetching portmaster-3.17.10.txz: 100%   42 KiB  42.6kB/s    00:01
Checking integrity... done (0 conflicting)
[1/1] Installing portmaster-3.17.10...
Extracting portmaster-3.17.10: 100%

Although it is not strictly necessary, ensure there is consistency with previous FreeBSD releases by means of the following command:

root@laertes:~ # pkg2ng
Converting packages from /var/db/pkg
Analysing shared libraries, this will take a while...
Checking all packages: 100%

Once the ports package system has been successfully installed and updated, switch to the /usr/ports/ directory. The port we want to install is the OpenLDAP 2.4 server, available as net/openldap24-server/ in the ports directory. Remember to install the OpenLDAP server by compiling the sources with the following options selected prior to starting the compilation process: GSSAPI, PPOLICY, MEMBEROF, DYNLIST, DYNGROUP, REFINT, SHA2, SASL, and UNIQUE, during the openldap24-server port configuration shown in the dialog in Figure 1.

root@laertes:~# cd /usr/ports
root@laertes:/usr/ports# portmaster net/openldap24-server

Figure 1: Portmaster OpenLDAP Server Options Dialog

Additionally, you can choose NLS support. The standard GSSAPI_BASE is enough for our purposes; as such, we need neither HEIMDAL nor MIT support for GSSAPI. The portmaster utility will attempt to resolve all of the required dependencies automatically.

===>>> The following actions will be taken if you choose to proceed:
  Install net/openldap24-server
  Install devel/icu
  Install devel/gmake
  Install security/cyrus-sasl2-gssapi
  Install devel/libtool
  Install print/texinfo
  Install devel/gettext-tools
  Install misc/help2man
  Install devel/p5-Locale-gettext

===>>> Proceed? y/n [y]

…

When completed, portmaster will report:

The OpenLDAP server package has been successfully installed.

In order to run the LDAP server, you need to edit
/usr/local/etc/openldap/slapd.conf
to suit your needs and add the following lines to /etc/rc.conf:

slapd_enable="YES"
slapd_flags='-h "ldapi://%2fvar%2frun%2fopenldap%2fldapi/ ldap://0.0.0.0/"'
slapd_sockets="/var/run/openldap/ldapi"

Then start the server with

/usr/local/etc/rc.d/slapd start

or reboot.

Try `man slapd' and the online manual at
http://www.OpenLDAP.org/doc/
for more information.

slapd runs under a non-privileged user id (by default `ldap'),
see /usr/local/etc/rc.d/slapd for more information.

************************************************************

===>>> Done displaying pkg-message files

===>>> The following actions were performed:
  Installation of devel/gmake (gmake-4.2.1_1)
  Installation of devel/icu (icu-59.1,1)
  Installation of devel/gettext-tools (gettext-tools-0.19.8.1)
  Installation of devel/p5-Locale-gettext (p5-Locale-gettext-1.07)
  Installation of misc/help2man (help2man-1.47.5)
  Installation of print/texinfo (texinfo-6.5,1)
  Installation of devel/libtool (libtool-2.4.6)
  Installation of security/cyrus-sasl2-gssapi (cyrus-sasl-gssapi-2.1.26_7)
  Installation of net/openldap24-server (openldap-sasl-server-2.4.45_2)

It is recommended to install the gnutls package to enable security features such as TLS/SSL:

root@laertes:~# cd /usr/ports
root@laertes:/usr/ports# portmaster security/gnutls

In addition, select the options available in the dialog in Figure 2, including UCS4 Unicode Support, NLS, and the GSSAPI_BASE option. A long set of packages will be installed to meet all the dependencies after a while.

Figure 2: GNUTLS Package Portmaster Installation Dialog

Notice that gnutls installs symlinks to support root certificate discovery by default for software that uses OpenSSL, thereby enabling SSL certificate verification by client software without manual intervention. Nevertheless, you can replace the following symlinks with either an empty file or your site-local certificate bundle if you prefer to do this manually:

/etc/ssl/cert.pem
/usr/local/etc/ssl/cert.pem
/usr/local/openssl/cert.pem

GnuTLS utilities will be used later on to check some TLS features incorporated into our OpenLDAP server.

TLS/SSL Support

Alternatively, it is possible to use OpenSSL instead of GnuTLS. Generally, it is not a good idea to mix packages and ports, so let’s use the portmaster method as above.

root@laertes:~# cd /usr/ports
root@laertes:/usr/ports# portmaster security/openssl
Figure 3: OpenSSL portmaster configuration dialog

Next, we will generate a new SSL key and prepare a certificate signing request (CSR) file to be sent to a Certification Authority (CA) of your choice for signature:

root@laertes~# cd /usr/local/etc
root@laertes~# openssl req -sha512 -out ldap.example.com.csr -new -newkey rsa:4096 -nodes -keyout ldap.example.com.key

Afterward, we will also need to generate the required Diffie-Hellman (DH) parameters file:

root@laertes~# openssl dhparam -out /usr/local/etc/dhparam.pem 4096

Once all of the required packages have been successfully installed, it is time to start with the preliminary analysis of the LDAP database architecture. Before starting our OpenLDAP server, edit the /etc/rc.conf file and add the following entries to enable OpenLDAP on our FreeBSD system:

slapd_enable="YES"
slapd_flags='-h "ldapi://%2fvar%2frun%2fopenldap%2fldapi/ ldap://0.0.0.0/"'
slapd_sockets="/var/run/openldap/ldapi"
slapd_cn_config="YES"

Please note that by default, the /usr/local/etc/rc.d/slapd script starts slapd(8) using only the static configuration file slapd.conf instead of our OLC-based configuration in the /usr/local/etc/openldap/slapd.d/ directory. Due to this, you must take care not to forget to add the corresponding entry for slapd_cn_config to /etc/rc.conf.

If you want to use LDAPS, modify the slapd_flags line above by adding a ldaps:/// URI as shown below:
slapd_flags='-h "ldapi://%2fvar%2frun%2fopenldap%2fldapi/ ldap:/// ldaps:///"'

The OpenLDAP server daemon slapd(8) defaults to looking for a text configuration file named slapd.conf placed in /usr/local/etc/openldap/. This file also needs to be updated to select whichever database backend you would like to use for your data. To make use of the `mdb` backend as we have, find and uncomment the following entry:

moduleload back_mdb

Now, to make use of SASL authentication for LDAP, you must generate a root password by using the slappasswd(1) command:

root@laertes:/usr/ports # slappasswd -h "{SSHA}"
New password:
Re-enter new password:
{SSHA}bNTVGLTAytavPx55XTSE2dEs2j10An18

This password shall be included at the end of the slapd.conf file:

root@laertes:# echo "rootpw {SSHA}bNTVGLTAytavPx55XTSE2dEs2j10An18" >> /usr/local/etc/openldap/slapd.conf

as well as defining the BaseDN and the RootDN to be used later on to deploy and administer LDAP directories:

###################################################
# MDB database definitions
###################################################

database mdb
maxsize 1073741824
suffix "dc=cae-hpc,dc=org"
rootdn "cn=admin,dc=cae-hpc,dc=org"

# Cleartext passwords, especially for the rootdn, should
# be avoided. See slappasswd(8) and slapd.conf(5) for details.
# Use of strong authentication encouraged.

rootpw secret

# The database directory MUST exist prior to running slapd AND
# should only be accessible by the slapd and slap tools.
# Mode 700 recommended.

directory /var/db/openldap-data

# Indices to maintain

index objectClass eq
index uidNumber eq
index uniqueMember eq
index gidNumber eq
index cn eq
index memberUid eq

Also use the script to start the slapd(8) daemon:

root@laertes:~ # /usr/local/etc/rc.d/slapd start
Starting slapd.

Now, the OpenLDAP server becomes active and ready to be populated with our data. However, before starting with that, let us take some time to examine LDAP taxonomy to introduce the new approach for configuring LDAP servers: the OpenLDAP Online Configuration (OLC).

LDAP Configuration

A more complex solution to handle NIS+ based upon LDAP servers requires modifying the slapd.conf file as follows:

include /usr/local/etc/openldap/schema/core.schema
include /usr/local/etc/openldap/schema/cosine.schema
include /usr/local/etc/openldap/schema/corba.schema
include /usr/local/etc/openldap/schema/inetorgperson.schema
include /usr/local/etc/openldap/schema/nis.schema
include /usr/local/etc/openldap/schema/collective.schema
include /usr/local/etc/openldap/schema/openldap.schema
include /usr/local/etc/openldap/schema/duaconf.schema
include /usr/local/etc/openldap/schema/dyngroup.schema
include /usr/local/etc/openldap/schema/misc.schema
include /usr/local/etc/openldap/schema/pmi.schema
include /usr/local/etc/openldap/schema/ppolicy.schema

pidfile /var/run/openldap/slapd.pid
argsfile /var/run/openldap/slapd.args
logfile /var/log/slapd.log
loglevel 256

modulepath /usr/local/libexec/openldap
moduleload back_mdb

disallow bind_anon
require authc

database mdb
suffix "dc=example,dc=org"
rootdn "cn=admin,dc=example,dc=org"
directory /var/db/openldap-data
maxsize 1073741824

access to attrs=userPassword
  by self write
  by anonymous auth
  by dn.base="cn=admin,dc=example,dc=org" write
  by * none

access to *
  by self write
  by dn.base="cn=admin,dc=example,dc=org" write
  by * read

# Indices to maintain

index objectClass eq
index uid eq
index uidNumber eq
index uniqueMember eq
index gidNumber eq
index cn eq
index memberUid eq

rootpw {SSHA}A6ia1SPQlY4J5qWBUkPg1qqiwZHrL0mb

overlay memberof
memberof-dangling drop
memberof-refint TRUE

Traditionally, text files have been the way of setting up the configuration for server daemons in the Unix world. However, OpenLDAP 2.4 introduces a new way to perform this configuration: Online Configuration, which stores these settings in an existing LDAP database. The instruments used are special text files named LDAP Data Interchange Format (LDIF) files, and this new procedure for configuring and defining everything in OpenLDAP servers is known as the OpenLDAP Online Configuration (OLC) method.

OpenLDAP Online Configuration (OLC)

Conventionally, OpenLDAP was configured using text files in a static way. In this case, the slapd.conf file was used by default. Besides this method of configuring, OpenLDAP 2.3 and later releases also support a new dynamic and online approach to configuring LDAP known as On-Line Configuration (OLC). OLC will be used in this article.

OLC represents the OpenLDAP server configuration as a tree (DIT) whose rootDN is the entity named cn=config. A more detailed picture of the LDAPv3 organisation using Online Configuration (OLC) is depicted in Figure 4.
Figure 4: OpenLDAP cn=config DIT Hierarchy

LDAP servers store information in hierarchical structures named Directory Information Trees (DIT). Additionally, in contrast to the former text-based configuration approach, OpenLDAP v2.4 incorporates OpenLDAP Online Configuration (OLC), which is slightly different from the former method.

All DITs are held by a super-structure named the Root DSE. DSE stands for DSA Specific Entry, and it acts as a management or control entity.

Recall the slapd.conf file we wrote in the previous section; its alternative OLC formulation is as shown below:

a) Part I. Global cn=config configuration

dn: cn=config
objectClass: olcGlobal
cn: config
#
#
# Define global ACLs to disable default read access.
#
olcArgsFile: /var/db/run/slapd.args
olcPidFile: /var/db/run/slapd.pid
#
# Log level
olcLogLevel: -1
# Do not enable referrals until AFTER you have a working directory
# service AND an understanding of referrals.
#olcReferral: ldap://root.openldap.org
# Sample security restrictions
#   Require integrity protection (prevent hijacking)
#   Require 112-bit (3DES or better) encryption for updates
#   Require 64-bit encryption for simple bind
#olcSecurity: ssf=1 update_ssf=112 simple_bind=64

b) Part II. Dynamic Modules load

The LMDB backend does not use caching. Moreover, it does not have special tuning needs to achieve good performance, apart from index rebuilding. Therefore, LMDB is the recommended primary backend to replace the Berkeley DB backend, and we shall select it for our OpenLDAP server configuration.

#
# Load dynamic backend modules:
#
dn: cn=module,cn=config
objectClass: olcModuleList
cn: module
olcModulepath: /usr/local/libexec/openldap
#olcModuleload: back_bdb.la
#olcModuleload: back_hdb.la
#olcModuleload: back_ldap.la
#olcModuleload: back_passwd.la
#olcModuleload: back_shell.la
olcModuleload: back_mdb.la

c) Part III. Schema load

dn: cn=schema,cn=config
objectClass: olcSchemaConfig
cn: schema
include: file:///usr/local/etc/openldap/schema/core.ldif
include: file:///usr/local/etc/openldap/schema/cosine.ldif
include: file:///usr/local/etc/openldap/schema/inetorgperson.ldif

d) Part IV. Frontend Database

dn: olcDatabase=frontend,cn=config
objectClass: olcDatabaseConfig
objectClass: olcFrontendConfig
olcDatabase: frontend
#
# Sample global access control policy:
#   Root DSE: allow anyone to read it
#   Subschema (sub)entry DSE: allow anyone to read it
#   Other DSEs:
#     Allow self write access
#     Allow authenticated users read access
#     Allow anonymous users to authenticate
#
#olcAccess: to * by * read
olcAccess: to dn.base="" by * read
olcAccess: to dn.base="cn=Subschema" by * read
olcAccess: to * by self write by users read by anonymous auth

e) Part V. MDB Database Backend

dn: olcDatabase=mdb,cn=config
objectClass: olcDatabaseConfig
objectClass: olcMdbConfig
olcDatabase: mdb
olcSuffix: dc=bsd-online,dc=org
olcRootDN: cn=admin,dc=bsd-online,dc=org
# Cleartext passwords, especially for the rootdn, should
# be avoided. See slappasswd(8) and slapd-config(5) for details.
# Use of strong authentication encouraged.
olcRootPW: {SSHA}jCgbLiQs8v9kYwKpLAI6oiHPI8ZZwzca
# The database directory MUST exist prior to running slapd AND
# should only be accessible by the slapd and slap tools.
# Mode 700 recommended.
olcDbDirectory: /var/db/openldap-data
# Indices to maintain
olcDbIndex: objectClass eq
olcDbIndex: default pres,eq
olcDbIndex: uid
olcDbIndex: cn,sn pres,eq,sub

The five parts of the listings above may be joined in a single LDIF file, paying attention to separate all DNs using empty lines.

Eventually, to perform the initial load of the slapd(8) daemon, just create a new directory to store its databases and start loading the previous LDIF file:

root@laertes:~ # mkdir /usr/local/etc/openldap/slapd.d/
root@laertes:~ # slapadd -F /usr/local/etc/openldap/slapd.d/ -n 0 -l slapd.ldif

As a result, the directory slapd.d is populated with a subdirectory tree starting with cn=config/ and its associated LDIF file:

root@laertes:/usr/local/etc/openldap/slapd.d # ls -R
cn=config    cn=config.ldif
./cn=config:
cn=module{0}.ldif    cn=schema.ldif    olcDatabase={0}config.ldif
cn=schema    olcDatabase={-1}frontend.ldif    olcDatabase={1}mdb.ldif

./cn=config/cn=schema:
cn={0}core.ldif    cn={1}cosine.ldif    cn={2}inetorgperson.ldif

In the event an error occurs, it is possible to clean up the whole OpenLDAP configuration and start from scratch by executing the commands below:

# cd /usr/local/etc/openldap/
# rm -fr slapd.d/*
# ../rc.d/slapd stop
# rm -fr /var/db/openldap-data/*
# ./rc.d/slapd start

However, it is possible to convert a static slapd.conf file to the alternative OLC configuration directory tree using the slaptest(8) tool:

root@laertes:/usr/local/etc/openldap# slaptest -f slapd.conf -F slap.conf.d/

For a more in-depth view on slapd(8) configuration using OLC, check the slapd-config(5) manual page for an accurate description of the existing backends.

Navigating across the LDAP Server Hierarchy. Search Patterns

According to RFC 1558, there are three types of search scopes set up in the tree hierarchy defined in every LDAP server that are used by the ldapsearch(1) command:

• Base
• One-Level
• Sub-Tree

Your selection will depend on whether you are staying at the top-level tree node (Base), descending to the first immediate level of the tree hierarchy (One-Level), or diving into the whole tree hierarchy (Sub-Tree), as will be seen in the following examples:

a) Root DSE Entry

dn:
structuralObjectClass: OpenLDAProotDSE
configContext: cn=config
namingContexts: dc=bsd-online,dc=org
supportedControl: 1.3.6.1.4.1.4203.1.9.1.1
supportedControl: 2.16.840.1.113730.3.4.18
supportedControl: 2.16.840.1.113730.3.4.2
supportedControl: 1.3.6.1.4.1.4203.1.10.1
supportedControl: 1.3.6.1.1.22
supportedControl: 1.2.840.113556.1.4.319
supportedControl: 1.2.826.0.1.3344810.2.3
supportedControl: 1.3.6.1.1.13.2
supportedControl: 1.3.6.1.1.13.1
supportedControl: 1.3.6.1.1.12
supportedExtension: 1.3.6.1.4.1.4203.1.11.1
supportedExtension: 1.3.6.1.4.1.4203.1.11.3
supportedExtension: 1.3.6.1.1.8
supportedFeatures: 1.3.6.1.1.14
supportedFeatures: 1.3.6.1.4.1.4203.1.5.1
supportedFeatures: 1.3.6.1.4.1.4203.1.5.2
supportedFeatures: 1.3.6.1.4.1.4203.1.5.3
supportedFeatures: 1.3.6.1.4.1.4203.1.5.4
supportedFeatures: 1.3.6.1.4.1.4203.1.5.5
supportedLDAPVersion: 3
supportedSASLMechanisms: SCRAM-SHA-1
supportedSASLMechanisms: GSSAPI
supportedSASLMechanisms: GSS-SPNEGO
supportedSASLMechanisms: DIGEST-MD5
supportedSASLMechanisms: CRAM-MD5
supportedSASLMechanisms: NTLM
entryDN:
subschemaSubentry: cn=Subschema

b) Querying DITs managed by LDAP

The base entry of each DIT is available through the namingContexts attribute:
root@laertes:~ # ldapsearch -H ldap:// -x -s base -b "" -LLL "namingContexts"
dn:
namingContexts: dc=bsd-online,dc=org

c) Querying DITs used for LDAP Configuration

root@laertes:~ # ldapsearch -H ldap:// -x -s base -b "" -LLL "ConfigContext"
dn:
configContext: cn=config

d) Accessing Configuration DITs

To see all the contents of the main configuration DIT and the schemas loaded, run the ldapsearch command as shown below:

root@laertes:~ # ldapsearch -Y EXTERNAL -H ldap:/// -b cn=config
dn: cn=config
objectClass: olcGlobal
cn: config
olcArgsFile: /var/run/openldap/slapd.args
olcPidFile: /var/run/openldap/slapd.pid
olcTLSCACertificatePath: /etc/openldap/certs
olcTLSCertificateFile: /etc/openldap/certs/cert.pem
olcTLSCertificateKeyFile: /etc/openldap/certs/priv.pem
olcLogLevel: -1

dn: cn=schema,cn=config
objectClass: olcSchemaConfig
cn: schema
olcObjectIdentifier: OLcfg 1.3.6.1.4.1.4203.1.12.2
olcObjectIdentifier: OLcfgAt OLcfg:3
olcObjectIdentifier: OLcfgGlAt OLcfgAt:0
olcObjectIdentifier: OLcfgBkAt OLcfgAt:1
olcObjectIdentifier: OLcfgDbAt OLcfgAt:2
olcObjectIdentifier: OLcfgOvAt OLcfgAt:3
olcObjectIdentifier: OLcfgCtAt OLcfgAt:4
olcObjectIdentifier: OLcfgOc OLcfg:4
olcObjectIdentifier: OLcfgGlOc OLcfgOc:0
olcObjectIdentifier: OLcfgBkOc OLcfgOc:1
olcObjectIdentifier: OLcfgDbOc OLcfgOc:2
olcObjectIdentifier: OLcfgOvOc OLcfgOc:3
olcObjectIdentifier: OLcfgCtOc OLcfgOc:4
olcObjectIdentifier: OMsyn 1.3.6.1.4.1.1466.115.121.1
..............................

objectClass: olcDatabaseConfig
objectClass: olcFrontendConfig
olcDatabase: {-1}frontend

dn: olcDatabase={0}config,cn=config
objectClass: olcDatabaseConfig
olcDatabase: {0}config
olcAccess: {0}to * by dn.base="gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth" manage by * none

dn: olcDatabase={1}monitor,cn=config
objectClass: olcDatabaseConfig
olcDatabase: {1}monitor
olcAccess: {0}to * by dn.base="gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth" read by dn.base="cn=admin,dc=bsd-online,dc=org" read by * none

dn: olcDatabase={2}hdb,cn=config
objectClass: olcDatabaseConfig
objectClass: olcHdbConfig
olcDatabase: {2}hdb
olcDbDirectory: /var/lib/ldap
olcDbIndex: objectClass eq,pres
olcDbIndex: ou,cn,mail,surname,givenname eq,pres,sub
olcSuffix: dc=bsd-online,dc=org
olcRootDN: cn=admin,dc=bsd-online,dc=org
olcRootPW: {SSHA}wd1zFsTyEDMtFqvzPahzSzVU0bOKicIN

A short look at the DNs managed by the DIT Configuration Entry may be displayed here:

root@laertes:~# ldapsearch -H ldap:// -Y EXTERNAL -b "cn=config" -LLL -Q dn
dn: cn=config
dn: cn=module{0},cn=config
dn: cn=schema,cn=config
dn: cn={0}core,cn=schema,cn=config
dn: cn={1}cosine,cn=schema,cn=config
dn: cn={2}nis,cn=schema,cn=config
dn: cn={3}inetorgperson,cn=schema,cn=config
dn: olcBackend={0}mdb,cn=config
dn: olcDatabase={-1}frontend,cn=config
dn: olcDatabase={0}config,cn=config
dn: olcDatabase={1}mdb,cn=config

And the most important thing, where the slapd(8) server information is stored:

root@laertes2:~# ldapsearch -H ldapi:// -Y EXTERNAL -b "cn=config" -LLL -Q -s base
dn: cn=config
objectClass: olcGlobal
cn: config
olcArgsFile: /var/run/openldap/slapd.args
olcPidFile: /var/run/openldap/slapd.pid
olcTLSCACertificatePath: /etc/openldap/certs
olcTLSCertificateFile: /etc/openldap/certs/cert.pem
olcTLSCertificateKeyFile: /etc/openldap/certs/priv.pem
olcLogLevel: -1

For advanced users, to verify the current status of the LDAP SSL/TLS server, GnuTLS provides an easy way to check if everything is all right:

root@laertes:~# gnutls-cli-debug -p 389 localhost
GnuTLS debug client 3.3.24
Checking localhost:636
unknown protocol ldaps
for SSL 3.0 (RFC6101) support... yes
whether we need to disable TLS 1.2... no
whether we need to disable TLS 1.1... no
whether we need to disable TLS 1.0... no
whether %NO_EXTENSIONS is required... no
whether %COMPAT is required... no
for TLS 1.0 (RFC2246) support... yes
for TLS 1.1 (RFC4346) support... yes
for TLS 1.2 (RFC5246) support... yes
for certificate chain order... sorted
for safe renegotiation (RFC5746) support... yes
for Safe renegotiation support (SCSV)... yes
for heartbeat (RFC6520) support... no
for version rollback bug in RSA PMS... dunno
for version rollback bug in Client Hello... no
whether the server ignores the RSA PMS version... yes
whether small records (512 bytes) are accepted... yes
whether cipher suites not in SSL 3.0 spec are accepted... yes
whether a bogus TLS record version in the client hello is accepted... yes
whether the server understands TLS closure alerts... partially
whether the server supports session resumption... yes
for anonymous authentication support... no
for ephemeral Diffie-Hellman support... yes
for ephemeral EC Diffie-Hellman support... yes
ephemeral EC Diffie-Hellman group info... SECP256R1
for AES-128-GCM cipher (RFC5288) support... yes
for AES-128-CBC cipher (RFC3268) support... yes
for CAMELLIA-128-GCM cipher (RFC6367) support... no
for CAMELLIA-128-CBC cipher (RFC5932) support... no
for 3DES-CBC cipher (RFC2246) support... yes
for ARCFOUR 128 cipher (RFC2246) support... yes
for MD5 MAC support... yes
for SHA1 MAC support... yes
for SHA256 MAC support... yes
for ZLIB compression support... no
for max record size (RFC6066) support... no
for OCSP status response (RFC6066) support... no
for OpenPGP authentication (RFC6091) support... no

Alternatively, from another remote computer with network access to our LDAP server:

root@laertes2:/etc/default# nmap -Pn -p T:636 --script ssl-enum-ciphers localhost
Starting Nmap 6.40 ( http://nmap.org ) at 2017-10-03 12:12 CEST
Nmap scan report for localhost (127.0.0.1)
Host is up (690s latency).
Other addresses for localhost (not scanned): 127.0.0.1
PORT    STATE SERVICE
636/tcp open  ldapssl
| ssl-enum-ciphers:
|   TLSv1.2:
|     ciphers:
|       TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA - strong
|       TLS_DHE_RSA_WITH_AES_128_CBC_SHA - strong
|       TLS_DHE_RSA_WITH_AES_128_CBC_SHA256 - strong
|       TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 - strong
|       TLS_DHE_RSA_WITH_AES_256_CBC_SHA - strong
|       TLS_DHE_RSA_WITH_AES_256_CBC_SHA256 - strong
|       TLS_DHE_RSA_WITH_AES_256_GCM_SHA384 - strong

At this point, the configuration of our OpenLDAP server running on FreeBSD is complete, and it is possible to perform some queries to check the accuracy of its contents.

Conclusions and Remarks

While operating an OpenLDAP server can seem tricky at first, getting to know the configuration DIT and how to find metadata within the system can help you hit the ground running. Modifying the cn=config DIT with LDIF files can immediately affect the running system. Likewise, configuring the system via a DIT allows you to potentially set up remote administration using only LDAP tools. This means that you can separate LDAP administration from server administration. Directory Services are a common means of providing centralised management for a heterogeneous environment in which different platform architectures and operating systems coexist, which makes this subject especially relevant for corporate applications. Furthermore, most of the activities described in this part may be applied to other Unix-like systems in a straightforward way.
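As a small, hypothetical illustration of such an on-line change (the attribute and value are only an example, not a step required by this article), the log level of a running server can be changed with a single ldapmodify(1) operation over the local socket:

root@laertes:~ # ldapmodify -Y EXTERNAL -H ldapi:/// <<EOF
dn: cn=config
changetype: modify
replace: olcLogLevel
olcLogLevel: 256
EOF

The change takes effect immediately, without restarting slapd(8).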
The second part of this series will focus on one of the most typical applications of LDAP: the embedding of NIS+ tables into a directory structure using FreeBSD.

Acronyms and Abbreviations

DSA   Directory System Agent
DIT   Directory Information Tree
DSE   DSA Specific Entry
OLC   OpenLDAP Online Configuration
DN    Distinguished Name
RDN   Relative Distinguished Name
LDIF  LDAP Data Interchange Format
LDAP  Lightweight Directory Access Protocol
NSS   Name Service Switch
PAM   Pluggable Authentication Modules
SSL   Secure Sockets Layer
TLS   Transport Layer Security
SASL  Simple Authentication and Security Layer
OID   Object Identifier
MDB   Memory-Mapped Database
LMDB  Lightning Memory-Mapped Database
YP    Yellow Pages

Meet the Author

José B. Alós has developed an important part of his professional career since 1999 as an EDS employee, as a UNIX System Administrator, mainly focused on SunOS/Solaris, BSD and GNU/Linux and High-Availability solutions for industry, communications services and banking.

In 2007 he joined EADS Defense and Security, as the person responsible for providing support for end-users in aircraft engineering departments for long-term projects. These days his professional career has moved to the High-Performance Computing and Simulation area within Airbus Group.

He was also an Assistant Professor at the Universidad de Zaragoza (Spain), and his academic background includes a PhD in Nuclear Engineering and three MSc degrees in Electrical and Mechanical Engineering, Theoretical Physics, and Applied Mathematics.

References and Bibliography

• Directories and X.500: An Introduction: http://www.nlc-bnc.ca/publications/1/p1-244-e.html
• Timothy A. Howes, Gordon S. Good, Mark Smith: Understanding and Deploying LDAP Directory Services. Macmillan Publishing, USA
• RFC 1558, A String Representation of LDAP Search Filters: https://tools.ietf.org/search/rfc1558
• RFC 2222, Simple Authentication and Security Layer (SASL): https://tools.ietf.org/search/rfc2222
• RFC 6101, The Secure Sockets Layer (SSL) Protocol Version 3.0: https://tools.ietf.org/search/rfc6101
• OpenLDAP Home Page: http://www.openldap.org/
• OpenLDAP 2.4 Administrator's Guide: http://www.openldap.org/doc/admin24/
• phpLDAPadmin Home Page: http://phpldapadmin.sourceforge.net
• FreeBSD 11.1 Release Home Page: https://www.freebsd.org/releases/11.1R/announce.html
• How to Install and Manage Ports on FreeBSD 10.1: https://www.digitalocean.com/community/tutorials/how-to-install-and-manage-ports-on-freebsd-10-1
FREEBSD

Fluentd For Centralized


Logging II: Fluentd-UI and
Suricata IDS
In a previous issue of BSD Magazine (Vol 11 No 06), I explained how to install Fluentd from source and from ports, and I gave a simple example of how to configure it. In this article, I'll explain in detail how to collect our logs on the server, then forward and process these logs so we can have better and more meaningful information. I'm collecting logs from Suricata (https://suricata-ids.org), which is a signature-based Intrusion Detection System (IDS) like Snort. If you'd like to know more about Suricata, check out its website.

For this experiment, I'm using FreeBSD 10.4-RELEASE with 4 GB of memory. First, prepare the Fluentd installation, and next install Suricata and oinkmaster from ports as follows. Oinkmaster is a tool for updating rules and signatures for Suricata.

$ cd /usr/ports/security/suricata
$ sudo make install clean

$ cd /usr/ports/security/oinkmaster
$ sudo make install clean

Select OK, or you can tick any configuration options you want. For this experiment, just use the default configuration; we can modify it later, after we finish with installation and configuration. I'm using Suricata version 4.0.0 from ports. Your Suricata and oinkmaster configurations are located in /usr/local/etc/suricata and /usr/local/etc/oinkmaster, respectively. You need oinkmaster to update the rules for Suricata, and we need to configure oinkmaster before we can start Suricata.

$ cp /usr/local/etc/oinkmaster.conf.sample /usr/local/etc/oinkmaster.conf

Open the file /usr/local/etc/oinkmaster.conf and add this configuration:

url = http://rules.emergingthreats.net/open/suricata/emerging.rules.tar.gz

After we update the configuration in oinkmaster.conf, we can check and download new threat signatures. Use this command:

$ sudo oinkmaster -C /usr/local/etc/oinkmaster.conf /usr/local/etc/suricata/rules

Next, enable Suricata on boot by adding this configuration in /etc/rc.conf.
suricata_enable="YES"
firewall_enable="YES"
natd_enable="YES"

We can then issue the command service suricata start to start the daemon. Notice that we should see information like this:

Starting suricata.
19/11/2017 -- 17:02:58 - <Notice> - This is Suricata version 4.0.0 RELEASE

Double-check with the ps ax command and the logs in /var/log/suricata/suricata.log. The next step is to configure the existing Fluentd (for the installation, you can check our previous article). Fluentd comes with a friendly user interface called fluentd-ui to connect fluentd, sources, and outputs. We need to install it first using this command:

$ sudo gem install -V fluentd-ui

After the installation of fluentd-ui finishes, start the service. The default address and port for fluentd-ui are 0.0.0.0 and 9292. The user interface is web-based, so we can access it from anywhere. The default login user is "admin" with password "changeme" (without double quotes). You have to change the default password. After login, we must define the PID file, log file, and config file for fluentd. Remember that the configuration file must be in the same location as the one used by the existing fluentd installation.

PID file: /var/log/fluentd/fluentd-ui/fluent.pid
Log file: /var/log/fluentd/fluentd-ui/fluent.log
Config file: /usr/local/etc/fluentd/fluent.conf

After configuring the pid, log, and conf files, we can proceed to the dashboard. Using this interface, we can manage the fluentd service (start, stop, restart). Press the "Add Source and Output" link on the left of the dashboard. We can see the workflow of fluentd and the current settings from the default configuration. In the Source group we have File, Syslog Protocol, Monitoring Agent, HTTP, and Forwarding (receiving from another fluentd). In the Output group we have stdout (log), Treasure Data, Amazon S3, MongoDB, Elasticsearch, and Forwarding. We need to configure two types of entries. First is the source, from which fluentd will read the logs. Second is the output: should we forward our output to another application or just put it in a database?

For this experiment, I'm using File (in_tail) as the source to read Suricata's logs. For the output, I'm using Elasticsearch, so we have to install the Elasticsearch plugin on the server. We can use the fluentd-ui plugin installation to install the Elasticsearch plugin.

Fluentd is not used for analyzing the logs, so we do the analysis with another analytics engine. We have Suricata running, and we can see the logs in /var/log/suricata. Back at the dashboard in fluentd-ui, choose "Add Source and Output" and choose "File" from the Source group. Set the file path to match the Suricata logs. Because we will record all logs from Suricata in both syslog format and json format, we choose suricata.log (/var/log/suricata/suricata.log) using syslog format, and the events from Suricata using json format. Scroll down the fluentd-ui interface and we see the contents of suricata.log like this:

20/11/2017 -- 03:16:38 - <Warning> - [ERRCODE: SC_WARN_IPFW_UNBIND(86)] - Unable to disable ipfw socket: Socket is not connected
20/11/2017 -- 03:17:06 - <Notice> - This is Suricata version 4.0.0 RELEASE
20/11/2017 -- 03:17:10 - <Notice> - all 3 packet processing threads, 4 management threads initialized, engine started.

Press the Next button to select the file format. There are several formats, including syslog, nginx, json, and csv. We pick syslog, and for time_format we have to match the log from suricata.log. Notice that in suricata.log we see the date and time as follows:

20/11/2017 -- 03:17:06 - <Notice> - This is Suricata version 4.0.0 RELEASE
Next is the tag; we can just put "var.*", which refers to the directory where Suricata's log is located. Press the Next button once again and we get the confirmation page. Press the "Update & Restart" button to finish the configuration and restart fluentd. This is from my configuration:

<source>
  type tail
  path /log/suricata/suricata.log
  tag var.*
  format syslog
  time_format %d/%m/%Y -- %H:%M:%S
  pos_file /tmp/fluentd--1511123793.pos
</source>

This input source only covers the suricata service log. You have to create another input for the Suricata events (eve.json). Events from Suricata are recorded using json, which is why for this input source we use json as the format. As for pos_file, just leave it at the default. The pos_file entry records the read position in the log file, which here is suricata.log. Press Advanced Settings and tick "Read from head", which means fluentd will read the log file from its first line.

<source>
  type tail
  path /log/suricata/eve.json
  tag var.*
  format json
  time_key timestamp
  read_from_head true
  pos_file /tmp/fluentd--1511132923.pos
</source>

Now we configure our output plugin. Go to the dashboard again and choose "Elasticsearch" in the Output group. Assuming you have installed your own Elasticsearch on a different server, you can change the host or any label with localhost below to your own server name or IP address. Fill in the form and follow this configuration:

<match **>
  type elasticsearch
  host localhost
  port 9200
  index_name via_fluentd
  type_name via_fluentd
  logstash_format false
  utc_index true
  hosts http://elastic:changeme@localhost:9200
  include_tag_key false
</match>

Before your fluentd server can connect to your Elasticsearch server, you have to create your own index on the Elasticsearch server. This is how to create an index called "via_fluentd" in Elasticsearch. This name should be the same as the one specified above in the Output configuration.

curl -XPUT 'localhost:9200/via_fluentd?pretty' -H 'Content-Type: application/json' -d'
{
  "settings" : {
    "index" : {
      "number_of_shards" : 3,
      "number_of_replicas" : 2
    }
  }
}
'

To test our configuration and make sure that fluentd can connect to Elasticsearch, stop and start the suricata service. In the fluentd stdout, we should see something like this:

2017-11-20 07:43:01 +0900 [info]: #0 fluentd worker is now running worker=0
2017-11-20 07:44:02 +0900 [info]: #0 Connection opened to Elasticsearch cluster => {:host=>"localhost", :port=>9200, :scheme=>"http", :user=>"elastic", :password=>"obfuscated"}

Now check your Elasticsearch server by sending the command below.

andrey@nada:~$ curl -XGET 'elastic:changeme@nada.clouds.web.id:9200/_cat/count/via_fluentd?v&pretty'

Your Elasticsearch server should respond with something like this:

epoch      timestamp count
1511099729 13:55:29  4

This count is proof that our fluentd successfully sent the logs into the Elasticsearch server.

You can also browse your Elasticsearch server in your web browser using this address: http://localhost:9200/via_fluentd

It should return output in json format with information taken from Suricata and forwarded by fluentd.
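As an extra check (this query is a sketch of my own, not part of the original walkthrough, and it assumes the indexed eve.json events keep Suricata's usual event_type field), you can ask Elasticsearch for the stored alert events directly:

curl -XGET 'elastic:changeme@localhost:9200/via_fluentd/_search?pretty' -H 'Content-Type: application/json' -d'
{
  "query": { "match": { "event_type": "alert" } },
  "size": 1
}'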

Conclusion

Fluentd is very modular and flexible. Almost all logs from known applications can be processed and forwarded. This makes fluentd very flexible and makes it easier for system administrators to manage their logging systems.

Meet the Author

Andrey Ferriyan is a writer, researcher, and practitioner, and a Python and R enthusiast. He has experience with UNIX-like servers (GNU/Linux, FreeBSD and OpenBSD) and is a Data Scientist wannabe. His areas of interest include Information Security, Machine Learning, and Data Mining.

He is now a student at Keio University under LPDP (the Indonesia Endowment Fund for Education). He leads a startup company in Indonesia called ATSOFT with his friends.
FREEBSD

FreeBSD, Google Cloud,


and Dual ECC/RSA

Let's Encrypt Certificates
Here's how I deployed a website to FreeBSD on the Google Cloud Platform. I set up HTTPS with free Let's Encrypt (letsencrypt.org) TLS certificates for both RSA and ECC, and set up automatic renewal of the dual certificates.

None of this is difficult, but I discovered that some steps aren't openly supported or well documented. Specifically, running FreeBSD on Google's IaaS or Infrastructure as a Service cloud environment, and automatically renewing dual RSA/ECC Let's Encrypt certificates.

This article is aimed at people who are in a situation similar to mine when I started. First, I'll assume you're reasonably comfortable with FreeBSD; there is no need to explain why it's a great choice of OS, or how to use the pkg command and control the Apache service.

Second, I expect that you're familiar with public cloud concepts and terminology, but you don't necessarily have any experience with Google's specific offerings.

Are you still with me? Then let's get started!

Google Cloud Platform and its Free Tier

The Google Compute Engine provides high performance, and the price is certainly right! The Free Tier includes several products that are always free up to some usage limits, of course, with a low cost beyond that. For details, see: https://cloud.google.com/compute/

The Free Tier includes one VM with plenty of horsepower for a website. Their f1-micro instance gives you a single-core Intel Xeon 2.20 GHz CPU with 614 MB of RAM. It's a shared-core machine, and you get 20% of a virtual CPU all the time, with bursts up to 100%. After the first one, each additional f1-micro machine costs just US$ 3.88 per month.

The f1-micro VM comes with 30 GB of constant disk storage based on locally attached solid-state drives. That's right, the storage is all SSD; mechanical disks aren't even a choice. Additional storage is US$ 0.04 per GB per month, although 30 GB was more than enough for me.
You get one static external IPv4 address. IPv6 is currently only available when you are also using load balancers, but they say general-purpose IPv6 is set for release. Google's data centers have plenty of bandwidth. Ingress traffic is unlimited, and most of the first 1 GB of egress traffic per month is free. Beyond the first gigabyte of outbound traffic, the pricing is complicated but quite cheap.

The first free gigabyte is to all destinations other than Australia and China other than Hong Kong. It's US$ 0.12 per GB to most of the world after the first free gigabyte. All traffic to Australia is US$ 0.19/GB, and all traffic to China other than Hong Kong is US$ 0.23/GB.

Reserve an IP Address

Follow Google's instructions to specify your geographic region. The Free Tier is only available in some regions. My server is located in Oregon. Then, follow their instructions to reserve an external IP address. Once your VM is running and associated with that address, you can go back and specify that it should be static, not changing after a reboot.

Your initial steps are done through a web interface. I have used the AWS or Amazon Web Services dashboard on a number of projects. However, within 10 minutes of my first exposure to the Google Cloud Platform dashboard, I found it much more intuitive and informative.

Below is the view after the VM was running and using the reserved IP address.

Set Up DNS

US$ 12 transfers a domain from your current registrar to Google Domains, and adds 1 year of registration. After that, it's US$ 12 per year. That's low cost considering that Google's DNS service provides great performance. Below is the Domains dashboard where I have registered cromwell-intl.com by IP address and set up A and CNAME records. It's very easy to use.

The A record for "@" means the domain itself, and it defines the IPv4 address.

The CNAME record specifies that www.cromwell-intl.com is an alias, and that the canonical name is simply cromwell-intl.com.

Therefore, regardless of whether the user assumes the name has the "www." or not, it resolves to the same IP address. In a later step, I will configure Apache to redirect all requests for the "www." version to the simpler name. The search engines will see the site as a single site, not a collection duplicated across two hostnames.
First, install the Google Cloud SDK package on your Set Up SSH
local system. There is a google-cloud-sdk FreeBSD
package, or get it for various operating systems from: Verify that the virtualized firewall will pass inbound
https://cloud.google.com/sdk/downloads SSH. Let's go ahead and add HTTP and HTTPS, and
remove the unneeded rule allowing RDP. We'll also
This gives you the gcloud command-line interface
make sure that ICMP is allowed, so we can do simple
set-up to run under Bash. I found gcloud much easier
ping tests.
to set up and use than the corresponding AWS
command-line toolkit. Once the server is deployed,
you can connect with SSH, and you seldom need
gcloud.

Start by using gcloud to see the list of images


currently available from the FreeBSD project:

$ gcloud compute images list \



--project freebsd-org-cloud-dev \

--no-standard-images

That's a lot! Let's narrow that down to the stable


RELEASE versions:

$ gcloud compute images list \
 Follow Google's instructions to add SSH keys:
--project freebsd-org-cloud-dev \

--no-standard-images | grep -i release https://cloud.google.com/compute/docs/instances/ad
ding-removing-ssh-keys
Now, deploy your FreeBSD server. Change the
version as needed, and change web to your desired Once I get a basic .ssh/authorized_keys file in place,
hostname. The 30 GB disk size is the maximum size I do everything with ssh and scp. The web interface
for the free tier. It was much more than enough to is just for general monitoring and, maybe, rebooting.
hold my site.
Your user can become root with sudo bash, and you
$ gcloud compute instances create web \

--image-project=freebsd-org-cloud-dev \
 can make further changes. Do not assign passwords
--image=freebsd-11-1-release-amd64 \
 to any user, stick with cryptographic authentication
--boot-disk-size=30GB \
 over SSH.
--boot-disk-type=pd-standard \

--machine-type=f1-micro
If you add a user to group wheel, they can become
root by simply running the command su because of
Now you can start the VM through the web
the contents of /etc/pam.d/su.
dashboard. It's running!

Networking
The Ethernet interface will be vtnet0. Your server is in
a private network, a VPC or Virtual Private Cloud,
something like the 10.138.0.0/24 network with just
your server and a (virtual) router. The router runs
NAT, mapping your server to your external static IP
address.

28
Packages
You want to use the ACPI-fast device:
The FreeBSD image comes with several packages
added to the basic install. I found these 22 packages # sysctl kern.timecounter.hardware

on the freshly deployed image: kern.timecounter.hardware: TSC

# sysctl kern.timecounter.choice

bash, ca_root_nss, curl,
kern.timecounter.choice: i8254(0)
firstboot-freebsd-update, firstboot-growfs,
flock, gettext-runtime, google-cloud-sdk, ACPI-fast(900) TSC(1000) dummy(-1000000)

google-daemon, google-startup-scripts, # sysctl kern.timecounter.hardware=ACPI-fast

indexinfo, libffi, libnghttp2, panicmail, kern.timecounter.hardware: TSC -> ACPI-fast
pkesh, pkg, python, python2, python27,
readline, sudo. I then added a line to each of /etc/sysctl.conf and
/etc/rc.conf.
I first installed all available updates for the existing
packages. Then, I added bind-tools and lsof for $ grep timecounter /etc/sysctl.conf

troubleshooting, and vim for my personal preference. kern.timecounter.hardware=ACPI-fast

# grep ntpd /etc/rc.conf

Thereafter, I added the packages needed for ntpd_enable=YES

Apache/PHP web service: apache24 and ntpdate_enable=YES
mod_php71 and their dependencies.
I rebooted to test, and now the clock was correct and
stayed spot-on.
Correcting Clock Problems
By now, the system has been running long enough Set Up Apache
that I noticed the huge clock drift. Within a minute,
the system clock drifts by several seconds. This is a I set-up Apache 2.4 with PHP 7.1, and got the site
known issue with FreeBSD running on Linux/KVM. served out over HTTP. This is well-documented
Let's see what the virtualized platform provides: elsewhere. So let's move to the next step.

$ dmesg | less
 Public-Key Security


[... output deleted ...]

random: unblocking device.
 Asymmetric cryptography, also called public-key
ioapic0 <Version 1.1> irqs 0-23 on motherboard

cryptography, bases its security on a trapdoor
Timecounter "TSC" frequency 1837606598 Hz
function. The trapdoor function is easy to compute in
quality 1000

one direction, but difficult to compute in the opposite
random: entropy device external interface

direction. RSA, which was developed in the late
[... output deleted ...]

1970s, relies on the difficulty of factoring the product
atrtc0: <AT realtime clock> port
0x70-0x71,0x72-0x77 irq 8 on acpi0
 of two very large prime numbers. It is easy to multiply
Event timer "RTC" frequency 32768 Hz quality 0
 integers, even the ones with hundreds of digits.
Timecounter "ACPI-fast" frequency 3579545 Hz However, it is impractically difficult to start with such
quality 900
 a product and figure out which two large prime
acpi_timer0: <24-bit timer at 3.579545MHz> numbers went into it.
port 0xb008-0xb00b on acpi0

[... output deleted ...]
 Then people got worried: what if someone develops
attimer0: <AT timer> at port 0x40 on isa0
 a general-purpose quantum computer? Shor's
Timecounter "i8254" frequency 1193182 Hz Algorithm could quickly factor very large numbers if
quality 0
 you run it on such a platform.
attimer0: Can't map interrupt.

ppc0: cannot reserve I/O port range
 Around this time, people started using mobile devices
Timecounters tick every 1.000 msec
 for Internet access. However, smartphones with fast
[... output deleted ...]

29
multi-core CPUs had not yet been developed. We're trusted by browsers. (Advanced note: these are DV
talking about early BlackBerry days. or Domain Verification certificates, not EV or
Extended Verification, limiting browsers' trust in them,
So, Elliptic Curve Cryptography or ECC suddenly but the price is certainly right!)
became popular. Its trapdoor function is based on a
discrete logarithm, entirely different from RSA's Let's Encrypt certificates are only good for 90 days.
factoring. Analysis by NSA and NIST showed that The short certificate lifetime makes automated
ECC provides same security with much smaller keys renewal important.
than RSA, requiring much less computation.
Yes, you can set up automated renewal of dual
So, two advantages: higher performance, and ECC/RSA Let's Encrypt certificates! I found that
perceived resistance to sudden obsolescence when this wasn't documented very well. Google searches
quantum computers appear. Certificate authorities lead to frequently-referenced blog postings about it
began issuing dual certificates for sites: one based being impossible, or how there is an overly complex
on ECC which newer clients would prefer for kludge when working around it. Hence the main point
performance, and RSA as a fall-back. of this article is that it's not hard to figure out, and it's
quite easy to set it up once you know the trick
Since then, cryptographers have discovered that
ECC will be just as susceptible as RSA to attack by Creating the RSA Certificate
quantum computers. But ECC still has a performance
advantage. ACME, the Automated Certificate Management
Environment, is a protocol for interacting with the
In August 2015, the NSA announced that ECC wasn't
Let's Encrypt CA. You use the certbot program to
a backup for RSA when facing the threat of quantum
carry out the various steps. Install the py27-certbot
computing cryptanalysis, to the point that government
package to get it.
agencies and contractors considering a migration
from RSA to ECC shouldn't bother. They later Now you're ready to make your first certificate:
modified the page, and thereafter, took it down. See
the archived update here. # certbot certonly --webroot \

-w /usr/local/www/htdocs/ \

We need post-quantum or quantum-safe -d example.com -d www.example.com

asymmetric ciphers. Several families of Key
post-quantum cipher algorithms are being explored:
lattice-based cryptography, code-base cryptography, I deliberately provided the root location of the web
multivariate polynomial cryptography, and others. document, and listed the domain names. Yes, clients
See the Post-Quantum Crypto conference series for will be redirected from www.example.com to
details. example.com as needed, but they must first make a
secure connection server and ask for the longer
To get back on track, ECC has a definite name with "www.".
performance advantage over RSA at the same
security levels. We want to support both. There is some narrative output. You are asked for an
email address in case they need to send you an
urgent renewal or security notice. You agree to the
TLS with Dual ECC/RSA Let's Encrypt
terms of service, then answer whether it's OK to
Certificates share your email address with the EFF, and you are
done.
Let's Encrypt (letsencrypt.org) is a certificate
authority founded by the Electronic Frontier I didn't tell it anything about the cryptography, so it
Foundation, the Mozilla Foundation, the University of generated and installed a 2048-bit RSA key pair.
Michigan, Akamai Technologies, and Cisco
Corporation. They issue free TLS certificates that are What Did You Get?

30
The key pair, certificate, and associated files have Notice the directories archive, containing the key
been created and saved under files, and live, containing links to those files. We will
/usr/local/etc/letsencrypt. Let's see what files were return to this detail in a bit.
saved there.
Creating the ECC Certificate
# cd /usr/local/etc/letsencrypt

# tree -F

Let’s now create an ECC private key and certificate.
.

|-- accounts/

We need a reasonably recent version of openssl.
| |-- acme-staging.api.letsencrypt.org/
 Check what yours is capable of:
| | `-- directory/

$ openssl ecparam -list_curves | less
| | `--
d72ae2a5cf968487add7cbdece6e3aab/

I will use elliptic curve P-384, designated secp384r1,
| | |-- meta.json

as it is the strongest elliptic curve included in NSA
| | |-- private_key.json

Suite B cryptography. See the U.S. NIST SP 800-57
| | `-- regr.json

"Recommendation for Key Management" for its
| `-- acme-v01.api.letsencrypt.org/

definition, and the following comparison of relative
| `-- directory/

strength against brute force attack:
| `--
5f78856fecb3b21a157f41d986716e2c/

Key Length in Bits for Approximately
| |-- meta.json

| |-- private_key.json
 Equal Resistance to Brute-Force

| `-- regr.json

Attacks, per NIST/NSA
|-- archive/

| `-- example.com/
 Security Symmetric Asymmetric Elliptic

| |-- cert1.pem

Strength (3DES, AES) (RSA, DSA) Curve
| |-- chain1.pem

| |-- fullchain1.pem
 80 80 1024 160

| `-- privkey1.pem

|-- csr/

112 112 2048 224
| `-- 0000_csr-certbot.pem

|-- keys/

| `-- 0000_key-certbot.pem
 128 128 3072 256
|-- live/

| `-- example.com/

| |-- README
 192 192 7680 384
| |-- cert.pem ->
../../archive/example.com/cert1.pem

| |-- chain.pem -> 256 256 15,360 512
../../archive/example.com/chain1.pem

| |-- fullchain.pem ->
../../archive/example.com/fullchain1.pem

| `-- privkey.pem ->
../../archive/example.com/privkey1.pem
 The first time I used certbot, I let it generate an RSA
`-- renewal/
 key pair. Since I need to generate an ECC
`-- example.com.conf certificate-signing request, I’ll start by generating

 an ECC private key:
14 directories, 18 files
$ openssl ecparam -genkey -name secp384r1 | openssl
ec -out ecc-privkey.pem

31
Before generating the CSR or Certificate Signing By default, it generates RSA keys with directory
Request, I must slightly change the OpenSSL archive/example.com containing the actual files, and
configuration to enable multiple names, both with and live/example.com containing symbolic links pointing
without "www.": to them. You can rename the archive and live
directories, but the files must have specific
Edit /etc/ssl/openssl.cnf.
names.
Find and uncomment the entry:
The "archive" directory, or whatever you end up
req_extensions = v3_req naming it, must have files named precisely
Add a line below that:
cert1.pem, chain1.pem, fullchain1.pem, and
privkey1.pem.
subjectAltName = @alt_names
The "live" directory, again possibly renamed, must
Add a new stanza at the end of the file:
have symbolic links with those same names minus
## Added the "1", precisely cert.pem, chain.pem, fullchain.pem,
and privkey.pem.
[alt_names]

DNS.1 = www.example.com
The automated RSA installation also created a
directory named renewal containing a configuration
DNS.2 = example.com file named for the domain plus ".conf".

First, I rearranged the existing hierarchy under


I can now generate the CSR. It will ask you for a /usr/local/etc/letsencrypt.
2-letter country code, state or province, locality, and
Rename the existing "archive" and "live" directories
so on.
rsa-archive and rsa-live.
$ openssl req -new -sha256 -key ecc-privkey.pem
-nodes -outform pem -out ecc-csr.pem

Recreate the symbolic links in rsa-live/example.com
to point to the relocated "archive" files.

Ask Let's Encrypt to generate a certificate. This time Edit renewal/example.com.conf and make
we pass it our new CSR. corresponding changes to the paths.

$ certbot certonly -w /usr/local/www/htdocs \
 Rename that file rsa-example.com.conf.


-d example.com -d www.example.com \

--email bob.cromwell@comcast.net \
 Verify that renewal still works:
--csr ecc-csr.pem --agree-tos
certbot renew --dry-run
This gives us three new files in the local directory:
Next, create new directories ecc-archive and ecc-live,
0000_cert.pem = The certificate itself each with a subdirectory named for the domain.
0000_chain.pem = The signing chain
Then:

0001_chain.pem = The full chain including our certificate Move the ECC files I just created into the ecc-archive
area, changing the names as required.
Solving the Mystery — Automatically
Create the symbolic links under ecc-live.
Renewing Dual Certificates
Rename the RSA files in csr and keys, and move the
It took some research after initial frustration to learn corresponding ECC files into those areas.
that certbot is very fussy about file names when it
comes to renewal.

32
Copy the file in renewal to ecc-example.com.conf |-- rsa-archive/

and edit that new file so its contents refer to the ECC | `-- example.com/


files. | |-- cert1.pem



| |-- chain1.pem

The result of all this is the following, where: | |-- fullchain1.pem

| `-- privkey1.pem

yellow indicates renamed files and changed file content, `-- rsa-live/

`-- example.com/

green indicates (re)created symbolic links,
|-- README

blue indicates new files and directories, and |-- cert.pem ->
../../rsa-archive/example.com/cert1.pem

grey indicates unchanged files |-- chain.pem ->
../../rsa-archive/example.com/chain1.pem

# cd /usr/local/etc/letsencrypt
 |-- fullchain.pem ->
# tree -F
 ../../rsa-archive/example.com/fullchain1.pem

.
 `-- privkey.pem ->
|-- accounts/
 ../../rsa-archive/example.com/privkey1.pem

| |-- acme-staging.api.letsencrypt.org/
 

| | `-- directory/
 18 directories, 29 files

| | `-- d72ae2a5cf968487add7cbdece6e3aab/

| | |-- meta.json

| | |-- private_key.json
 The automated renewal files now contain the
| | `-- regr.json
 following:
| `-- acme-v01.api.letsencrypt.org/

# cat renewal/ecc-example.com.conf

| `-- directory/

# renew_before_expiry = 30 days

| `-- 5f78856fecb3b21a157f41d986716e2c/

version = 0.18.2

| |-- meta.json

archive_dir =
| |-- private_key.json
 /usr/local/etc/letsencrypt/ecc-archive/example.com

| `-- regr.json
 cert =
|-- csr/
 /usr/local/etc/letsencrypt/ecc-live/example.com/cer
t.pem

| |-- ecc-csr.pem

privkey =
| `-- rsa-csr.pem
 /usr/local/etc/letsencrypt/ecc-live/example.com/pri
|-- ecc-archive/
 vkey.pem

| `-- example.com/
 chain =
/usr/local/etc/letsencrypt/ecc-live/example.com/cha
| |-- cert1.pem

in.pem

| |-- chain1.pem

fullchain =
| |-- fullchain1.pem
 /usr/local/etc/letsencrypt/ecc-live/example.com/ful
| `-- privkey1.pem
 lchain.pem

|-- ecc-live/
 

| `-- example.com/
 # Options used in the renewal process

| |-- cert.pem -> [renewalparams]

../../ecc-archive/example.com/cert1.pem

authenticator = webroot

| |-- chain.pem ->
installer = None

../../ecc-archive/example.com/chain1.pem

| |-- fullchain.pem -> account = 5f78856fecb3b21a157f41d986716e2c


../../ecc-archive/example.com/fullchain1.pem
 webroot_path = /usr/local/www/htdocs,



| `-- privkey.pem -> [[webroot_map]]

../../ecc-archive/example.com/privkey1.pem
 www.example.com = /usr/local/www/htdocs

|-- keys/
 example.com = /usr/local/www/htdocs

| |-- ecc-privkey.pem
 

| `-- rsa-privkey.pem
 # cat renewal/rsa-example.com.conf

|-- renewal/
 # renew_before_expiry = 30 days


| |-- ecc-example.com.conf
 version = 0.18.2



archive_dir =
| `-- rsa-example.com.conf


33
/usr/local/etc/letsencrypt/rsa-archive/example.com
 ---------------------------------------------------

cert = 

/usr/local/etc/letsencrypt/rsa-live/example.com/cer
---------------------------------------------------

t.pem

Processing
privkey =
/usr/local/etc/letsencrypt/renewal/ecc-example.com.
/usr/local/etc/letsencrypt/rsa-live/example.com/pri
conf

vkey.pem

chain = ---------------------------------------------------

/usr/local/etc/letsencrypt/rsa-live/example.com/cha 

in.pem
 ---------------------------------------------------

fullchain =

/usr/local/etc/letsencrypt/rsa-live/example.com/ful
The following certs are not due for renewal yet:

lchain.pem


 /usr/local/etc/letsencrypt/rsa-live/example.com/ful
# Options used in the renewal process
 lchain.pem (skipped)

[renewalparams]

authenticator = webroot
 /usr/local/etc/letsencrypt/ecc-live/example.com/ful
lchain.pem (skipped)

installer = None

No renewals were attempted.

account = 5f78856fecb3b21a157f41d986716e2c

---------------------------------------------------
webroot_path = /usr/local/www/htdocs,

[[webroot_map]]

Unless you run certbot with the --force-renewal
www.example.com = /usr/local/www/htdocs

option, it will wait until there are only 30 days left.
example.com = /usr/local/www/htdoc


We can use the openssl tool to parse and display the


Now test this: certificates.

# certbot renew --dry-run
 # openssl x509 -in ecc-live/example.com/cert.pem


-text -noout


Did it work? Great! # openssl x509 -in rsa-live/example.com/cert.pem
-text -noout

Automated Renewal
Enabling HTTPS With Those Dual
Now, let's automate the renewal. Set up a crontab job Certificates
to run certbot in renewal mode twice a day. It won't
do anything until there are 30 days left. We'll do this Edit the httpd.conf configuration file and add the
frequently so we can spot any problems quickly. Pick following to the file, changing the hostname and file
random times: system paths as needed. Make sure to use the file
fullchain.pem, which contains the full certificate
# crontab -l

# min hr day-of-month month day-of-week command
 chain, and not cert.pem which has just your site's
44 4 * * * certbot renew > /root/certbot-output certificate.
2>&1

44 16 * * * certbot renew > /root/certbot-output # Put these directives at the global level:

2>&1
 LoadModule ssl_module libexec/apache24/mod_ssl.so


# cat /root/certbot-output
 Listen 443



Saving debug log to 

/var/log/letsencrypt/letsencrypt.log
 # Put these within individual VirtualHost stanzas

Cert not yet due for renewal
 # if you are hosting several sites on one server.

Cert not yet due for renewal
 <VirtualHost *:443>


 ServerName example.com

---------------------------------------------------
 SSLEngine on

Processing # ECC secp384r1

/usr/local/etc/letsencrypt/renewal/rsa-example.com. SSLCertificateFile
conf
 "/usr/local/etc/letsencrypt/ecc-live/example.com/fu

34
llchain.pem"
 single site, given its size and traffic, I didn't see a big
SSLCertificateKeyFile
advantage of one method over the other.
"/usr/local/etc/letsencrypt/ecc-live/example.com/pr
ivkey.pem"

# RSA
 Improving the TLS Configuration
SSLCertificateFile
"/usr/local/etc/letsencrypt/rsa-live/example.com/fu Apache has a good SSL/TLS how-to document:
llchain.pem"

https://httpd.apache.org/docs/2.4/ssl/ssl_howto.html
SSLCertificateKeyFile
"/usr/local/etc/letsencrypt/rsa-live/example.com/pr
ivkey.pem"

Even more useful, Mozilla has a configuration
</VirtualHost> generator:
https://mozilla.github.io/server-side-tls/ssl-config-gen
Restart Apache, and verify that you can connect with erator/
both HTTP and HTTPS.
Select your server, its version, and your OpenSSL
Redirect to HTTPS without "www." versions, and lastly, select the security profile.

The goal is to accept all connections, redirecting all Which security profile should you use? It
of these: depends...

http://example.com/some/path/ Let's say you're setting up a server for use within


your organization, and you have full control of the
http://www.example.com/some/path/ desktop systems and any portable laptops that could
be connected from outside. In that case, I
https://www.example.com/some/path/ recommend the strictest "Modern" profile. All your
client machines will need to be fairly current, but that
to this: should already be the case.

https://example.com/some/path/ However, let's say that you want to be open to all


clients from the public. That's my situation. It would
Add the following to your .htaccess file in the root of
be nice if everyone used up-to-date operating
the web site. If the RewriteEngine line is already in
systems and browsers, but I don't want to block or
the file, don't duplicate it.
even inconvenience people with outdated platforms.
# Remove "www." and redirect HTTP to HTTPS

RewriteEngine on
 I used the "Intermediate" profile as a starting point.
# Use a standard variable and a tagged regular Here is what I added towards the end of the
expression to
 httpd.conf configuration file, before and outside the
# replace the URL with "https://", the host name,
VirtualHost stanza. Hence it will apply to all virtually
and the

hosted websites I eventually set up on the server.
# path minus any leading "www.":

The SSLCipherSuite line is enormously long. I
RewriteCond %{HTTP_HOST} ^www\.(.*)$ [NC]

RewriteRule ^(.*)$ https://%1/$1 [R=301,L]
 started with what Mozilla's "Intermediate" profile gave
# If they asked for the non-www name but with HTTP, me, and reordered that to put 3DES or "DES-CBC3"
build a
 at the end.
# new HTTPS URL with the host name and the path:

RewriteCond %{HTTPS} off
 That provides 3DES as a fallback position for
RewriteRule ^(.*)$ connections from IE 8 on XP.
https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

Now test the various redirection cases.

Yes, I did it with the site-specific .htaccess file. I could


have instead done it with slightly different syntax in
the server-wide httpd.conf configuration file. For the

35
# TLS only, no SSL

SSLProtocol all -SSLv2 -SSLv3

# Specify ciphers in a preferred order. I
reordered what the configuration

# generator gave me, putting 3DES ("DES-CBC3") at
the end.

SSLCipherSuite
ECDHE-ECDSA-CHACHA20-POLY1305:[...much deleted, see
above]

SSLHonorCipherOrder on

# Disable compression and session tickets

SSLCompression off

SSLSessionTickets off

# Enable OCSP Stapling

LoadModule socache_shmcb_module
libexec/apache24/mod_socache_shmcb.so

SSLUseStapling On

Try It Out!
SSLStaplingCache "shmcb:logs/ssl_stapling(32768)"

I would suggest that you try FreeBSD on the Google
# Enable session resumption (caching)

SSLSessionCache "shmcb:logs/ssl_scache"

Cloud Platform. There's zero cost for a short
# Insist on HSTS or HTTP Strict Transport Security
 experiment, and I think many people will like that
Header always set Strict-Transport-Security environment.
"max-age=31536000; includeSubDomains; preload"

And with that, an A+ evaluation from the Qualys And More...


analyzer! See how your server scores at
I have more details on my site with further Apache
www.ssllabs.com.
details, including setting up some HTTPS headers
that can further enhance security. To read further,
Monitoring The Dashboard
visit:
You can get a quick and clear overview of recent
https://cromwell-intl.com/open-source/google-freebsd
server activity with the Google Cloud Platform
-tls/
dashboard. That's "Home" in the 3-line menu at the
upper left in the GCP’s pages. You can monitor the
network traffic and CPU utilization, and keep an eye
on the month's billing so far, among other features. Meet the Author
Also, you can customize what you see here. I have Bob Cromwell has
network traffic and monthly billing up top and CPU been using OpenBSD
utilization below the traffic graph. To me, this seems since, well, not sure
how long… Some
like a big improvement over the AWS dashboard, time in the late 1990s.
where I have to track down the pieces on various He’s used Linux since
screens. you downloaded 40+
floppy images, some
time around 1993-1994. Before that he had used
UNIX, SunOS and forms of BSD, at Purdue since the
mid 1980s. He got a BSEE at Purdue back then,
worked at the university, grad school, Ph.D. in
electrical and computer engineering, has done
consulting since 1992. He’s taught courses for
Learning Tree International since the mid 1900s, and
has written courses for them since the late 1990s.

36
HEY GOLIATH...

MEET DAVID
TRUENAS® PROVIDES MORE PERFORMANCE, FEATURES, AND CAPACITY PER-
DOLLAR THAN ANY ENTERPRISE STORAGE ARRAY ON THE MARKET.

Introducing the TrueNAS X-Series: Perfectly suited for core-edge configurations and enterprise
workloads such as backups, replication, and file sharing.

Unified: Simultaneous SAN, NAS, and object protocols to support multiple applications

Scalable: Up to 120 TB in 2U and 720 TB in 6U

Safe: High Availability ensures business continuity and avoids downtime

Reliable: Uses OpenZFS to keep data safe

Trusted: TrueNAS is the Enterprise version of FreeNAS®, the world’s #1 Open Source SDS

Enterprise: Enterprise-class storage including unlimited instant snapshots and advanced storage
optimization at a lower cost than equivalent solutions from Dell EMC, NetApp, and others

The TrueNAS X10 and TrueNAS X20 represent a new class of enterprise storage. Get the full
details at iXsystems.com/TrueNAS.

Copyright © 2017 iXsystems. TrueNAS and FreeNAS are registered trademarks of iXsystems, Inc. All rights reserved.

37
FREEBSD
Mongoose Embedded Web
Server on FreeBSD
You will learn …

• What is an Embedded Software

• What is a Web Server

• What is Mongoose

• How To Install Mongoose On FreeBSD

• Serving A Web Site With Mongoose

• How To Secure Mongoose Web Server

What is an Embedded Software? Software development requires the use of a


cross-compiler which runs on a computer but
An embedded software is a computer program produces executable code for the target device.
created to control specific devices. Typically, these Debugging requires the use of an in-circuit emulator,
devices have some memory, storage, and JTAG or SWD. Software developers often have
performance limitations. access to the complete kernel (OS) source code.
These limitations force developers to use C or
An embedded software program must be stable, embedded C++.
clean, and fast. Memory footprint of embedded
software is so critical that developers must create it
with caution.

No matter it’s a TV or a missile, embedded software


must to perform task flawless. Embedded software
needs to include all required device drivers. The
device drivers are written for the specific hardware.
The software is highly dependent on the CPU and
specific chips chosen.

38
What is a Web Server? Mongoose surpassed 2,000,000 record of
downloads.
A web server is a software system that processes
requests via HTTP or any other protocols. The Functions of Mongoose include:
primary function of a web server is to manage
• Cross-platform, support for Unix/Linux, *BSD,
communication between client and server, and this
eCos, Windows, OS X, QNX, and more.
takes place using the Hypertext Transfer Protocol
(HTTP).
• CGI, SSI, Digest (MD5) authorization, WebSocket,
and WebDAV support
Web servers are not only used for serving the
internet but also as a part of a system for monitoring
• Resumed download, URL rewriting support, and
or administering. All we need is a browser to access HTTP proxy support
these embedded applications.
• SSL support, both one-way and two-way SSL
There are two types of web servers:
• IP address-based ACL, Windows service, GET,
• User-mode web server POST, HEAD, PUT, and DELETE methods

• Kernel-mode web server


How To Install Mongoose On FreeBSD?
User-mode web servers are slower because the
To install mongoose from the ports mechanism:
system takes time to respond to the request for
allocation of resources, but are more secure. In # cd /usr/ports/www/mongoose
situation where the web server is compromising, only
the web server process may be dropped at crash. # make install clean

A kernel-mode web server can process more queries To install mongoose with the package manager:
per second or QPS.
#pkg install mongoose
What is Mongoose?
You can start mongoose at boot time by:
Mongoose is a cross-platform embedded web server
# sysrc mongoose_enable="YES"
which is available under GPL v2 and commercial
licenses, and has a small size.
If you restart your machine, mongoose web server
Mongoose is built on top of the Mongoose Embedded will serve /var as http file sharing on port 8080. You
Library which can be used for the implementation of can see contents of /var by browsing 127.0.01:8080:
RESTful services to serve Web GUI on embedded
# curl 127.0.0.1:8080
devices. Mongoose is a cross-platform application
that can be used on Windows, Macintosh OS, Linux, And just to make sure that mongoose is up and
QNX, eCOS, Free RTOS, Android, and iOS. running, issue the following command:

With just over 130 kB source code and an executable # /usr/local/etc/rc.d/mongoose status
footprint of 43 kB on FreeBSD, Mongoose is one of
the smallest web servers available. Mongoose is Output:
written in C.
mongoose is running as pid 4218.
Mongoose is used by several companies in various
industries, including software companies, equipment Or you can find it by listening port:
companies, semiconductor companies, and some
# sockstat -4l
Fortune 500 technology companies. In January 2017,

39
Output: </body>

USER COMMAND PID FD PROTO LOCAL </html>


ADDRESS FOREIGN ADDRESS
Then, run mongoose by typing the following:
root mongoose 4218 5 tcp4 *:8080 *:*
# mongoose -listening_port 127.0.0.1:80
mongoose does not detach from terminal, and it uses
current working directory as the web root, unless -r Mongoose will listen on localhost port 80. If you have
option is specified. It is possible to specify multiple many other interfaces, you can bind mongoose to a
ports to listen on. For example, to make mongoose specific interface.
listen on HTTP port 80 and HTTPS port 443, one
should start it as: mongoose -s cert.pem -p 80,443s

Unlike other web servers, mongoose does not


require CGI scripts to be put in a special directory.
CGI scripts can be placed anywhere.

Disable Directory Listing

You can disable directory listing by typing the


following:

# mongoose -listening_port 127.0.0.1:80


-enable_directory_listing no

Log Access To Website

This command will log all access to log.txt at the


same path as index.html:
Serving A Web Site With Mongoose
# mongoose -listening_port 127.0.0.1:80
First, you can create a simple html file. Let’s call this
-access_log_file log.txt
file index.html , and put it on /usr/local/www
The logs look like this:
# mkdir -p /usr/local/www
127.0.0.1 - - [19/Nov/2017:20:37:49
# cd /usr/local/www
+0330] "GET / HTTP/1.1" 304 0 -
# ee index.html "Mozilla/5.0 (X11; FreeBSD amd64;
rv:56.0) Gecko/20100101 Firefox/56.0"
and put this on index.html :
How To Secure Mongoose Web Server?
<!DOCTYPE html>
There are so many tuning we can add to mongoose,
<html> but two of them are necessary:

<body> Change running user to www

<h1> BSDMAGAZINE </h1> mongoose -listening_port 127.0.0.1:80


-access_log_file log.txt -run_as_user
<p> bsdmag.org </p> www

40
If mongoose crashes, only mongoose will go down full-fledged, fast and minimal web server. Additionally,
not the entire server. you can run mongoose on non-embedded devices.
For example, “corebox.ir” is based on mongoose web
Change www permissions to proper value server.

chmod -R -w /usr/local/www
Useful Links
This command removes write permission so a hacker
https://github.com/GerHobbelt/civet-webserver/wiki/M
can’t run shell on your server.
ongoose-Manual
Change www folder owner
https://linux.die.net/man/1/mongoose
chown -R www:www /usr/local/www
http://in4bsd.com
Only www can add or remove content to this folder.
http://meetbsd.ir
Access Control List

mongoose -listening_port 192.168.1.1:80


-run_as_user www -access_control_list
-0.0.0.0/0,+192.168.3.0/24

This command runs mongoose on 192.168.1.1 port


80, and denies connections from everywhere, except
for 192.168.3.1/24.

Tip: we can’t call this firewall but you can do some


tricks.

Conclusion

Internet of things (IoT) is getting more popular, and


maybe FreeBSD and Mongoose will be a wise
choice. With FreeBSD and Mongoose, you can run a

Meet the Author

Abdorrahman Homaei has been working as a software developer since


2000. He has used FreeBSD for more than ten years. He became involved
with the meetBSD dot ir and performed serious training on FreeBSD. He
started his company, etesal amne sara tehran, in February, 2017. His
company is based in Iran Silicon Valley.

Full CV: http://in4bsd.com

His company: http://corebox.ir

41
DATABASE

Using PostgreSQL Foreign Data


Wrapper to Keep Track of Files

You will learn ...


• How to Use PostgreSQL Foreign Data Wrapper to Keep Track of Files


• Compiling and Installing the Foreign Data Wrapper
• How to Create the Extension
• How to Create the File System Table
• Creating a Snapshot of Files


In this paper, you will see how PostgreSQL can be solution is straightforward: build an application that
extended to pull data out of special /data sources that can perform some DML (Data Manipulation
allow the database cluster to query the outside world Language) against a database.
called Foreign Data Wrapper/s. There are many
implementations of FDW that allow PostgreSQL to Another approach is to use a File System FDW. A
live-query other databases, as well as other data layer that connects your database directly to a File
sources like web pages, files, processes, and so on. System Data Source so that instead of the database
waiting for new data to be stored, it can (to some
This paper proposes a simple setup of a File System extent) pull the data automatically.
FDW that allows a system administrator or an
application to query the filesystem to get information In this article, I will show you how to use the
about files, as well as storing at least one historical Multicorn FDW to achieve a poor-man
version of the latter. The approach presented here is database-SCM.
not meant, to any extent, to substitute the traditional
To execute the code snippets, you need:
and better suited Source Control Management
software (like RCS and alike). Moreover, all the
• git and gmake installed;
examples provided aim only to present the reader
with a simple background on the capabilities that • python version 2.7 or higher;
FDW allow.
• PostgreSQL (a recent version, for this article, I
Introduction used version 9.6.5);

Imagine you want to store some information about • Access to privileged user capabilities (e.g., using
your system configuration file in a database. The sudo).

42
You will also need some basic knowledge about OPTIONS ( wrapper
'multicorn.fsfdw.FilesystemFdw' );
PostgreSQL, how to create a database, a superuser
role, and so on. You can get more information
• Create the File System Table
reading the online documentation or my previous
articles on the matter. Suppose we want to collect information about the
/usr/local/etc/ configuration files. Therefore, you need
Compiling and Installing the Foreign to define a table that will contain various data:
Data Wrapper • the filename;

There are several Foreign Data Wrappers (FDW) • the content (as text);
available for PostgreSQL. In this example, we are
going to use the Multicorn FDW, a set of Python We can elaborate a little more by adding a hash
modules that provide several FDW implementations column, and the date the file has been inspected.
within the same installation. One of such
implementation is the File System FDW. Therefore, the table will be defined as:

# CREATE FOREIGN TABLE usr_local_etc (


The first step is to get the latest Multicorn
implementation. In this example, you will install the full_file_name text,
development version obtained via Git:
content text,
% git clone
git://github.com/Kozea/Multicorn.git service text

) SERVER filesystem_server
Before you can actually compile Multicorn, you need
to adjust it to compile on FreeBSD: OPTIONS( root_dir '/usr/local/etc',

Edit the preflight-chech.sh file, and change the first pattern '{service}.conf',
line with the current available Bash, that is:
content_column 'content',
% head -n1 preflight-check.sh
filename_column 'full_file_name' );
#!/usr/local/bin/bash
Now, you can try it with a simple SELECT statement:
Remember to run gmake instead of make, so:ake
# SELECT service, full_file_name

istall FROM usr_local_etc;

• Create the Extension service | full_file_name


----------+----------------
To create the extension, you need to connect to the
pkg | pkg.conf
PostgreSQL database as superuser, and then load
tcsd | tcsd.conf
the Multicorn extension. After that, you need to define
pcp | pcp.conf
a Data Server, an entry point for external data to
come into the database. pgpool | pgpool.conf
pool_hba | pool_hba.conf
Therefore: idn | idn.conf

# CREATE EXTENSION multicorn; idnalias | idnalias.conf

# CREATE SERVER filesystem_server

FOREIGN DATA WRAPPER multicorn


However, there is a hidden problem: while the user
can run simple stat commands on the filesystem,

43
he/she cannot get the content of the files. In fact, if % id postgres
you try to get the content of a file you’ll get an error:
uid=770(postgres) gid=770(postgres)
groups=770(postgres)
# SELECT service, content FROM usr_local_etc;

ERROR: Error in python: OSError


% sudo pw usermod -n postgres -G _tss
DETAIL: [Errno 13] Permission denied:
'/usr/local/etc/tcsd.conf'

The problem arises from the fact that % id postgres


/usr/local/etc/tcsd.conf has no world-readable flag. A
quick solution is to allow another user to read by uid=770(postgres) gid=770(postgres)
groups=770(postgres),601(_tss)
either changing the file mode (e.g., 644) or to invite
the user running the PostgreSQL server to the group Once the above problem is solved and the trick
of the file owner (in this case _tss), and setting the applied to any problematic file, you can query the
mode to 640. table to get living data from the underlying file system
(See Listing 1).

Listing 1. Living data

# SELECT service, content

FROM usr_local_etc

WHERE service = 'pkg';

service | content

---------+---------------------------------------------------------------------

pkg | # System-wide configuration file for pkg(8) +

| # For more information on the file format and +

| # options please refer to the pkg.conf(5) man page +

| +

| # Note: you don't need to have a pkg.conf file. Many installations+

| # will work well with no pkg.conf at all or with an empty pkg.conf +

| # (other than comment lines). You can also override any of these +

| # settings from the environment. +

| +

| # Configuration options -- default values. +

| +

| #PKG_DBDIR = "/var/db/pkg"; +

| #PKG_CACHEDIR = "/var/cache/pkg"; +

...

44
ORDER BY service
Creating a Snapshot of files
WITH NO DATA;
Using the Foreign Data Wrapper, the database will
query the filesystem each time you issue a query, When you decide to pull updated data from the
and this means the data in the usr_local_etc table will filesystem into your snapshot, do the following:
change accordingly to changes performed outside
# REFRESH MATERIALIZED VIEW
the database. usr_local_etc_snapshot;

If you need to keep a snapshot of the file content, Let's check that the data into the view is coherent
let's say to implement a poor-man file control with what is in the database (See Listing 2.)
management, you can use a materialized view.
Also, check the MD5 outside of the database:
A materialized view is a view over data that is
populated by a snapshot of data pulled out from a % sudo md5 /usr/local/etc/pkg.conf
~
table. Each time you refresh the view, new data is
pulled out of the table. Otherwise, the view will MD5 (/usr/local/etc/pkg.conf) =
provide a static snapshot of the data at the time it 84925257b233f69068214cdaf3f630a2
was last updated.
As you can see, the MD5 is the same. Therefore, the
To better explain it, let's create a materialized view to data in the materialized view does represent the
get the content of the files into the file system: current snapshot of the filesystem.

# CREATE MATERIALIZED VIEW Now, imagine you modify the pkg.conf file so that it is
usr_local_etc_snapshot AS
updated outside of the database:
SELECT service, full_file_name, content,
% sudo emacs /usr/local/etc/pkg.conf
current_timestamp AS ts,
...
md5( content ) AS hash
% sudo md5 /usr/local/etc/pkg.conf
FROM usr_local_etc
MD5 (/usr/local/etc/pkg.conf) =
a82431a939e221dd5fc8b702542a30d4

Listing 2. Database

# SELECT full_file_name, hash, ts

FROM usr_local_etc_snapshot;

full_file_name | hash | ts

----------------+----------------------------------+-------------------------------

pkg.conf | 84925257b233f69068214cdaf3f630a2 | 2017-11-09 16:54:30.668574+01

...

45
Listing 3. Reports

# SELECT full_file_name, hash, ts

FROM usr_local_etc_snapshot

WHERE service = 'pkg';

full_file_name | hash | ts

----------------+----------------------------------+-------------------------------

pkg.conf | 84925257b233f69068214cdaf3f630a2 | 2017-11-09 16:54:30.668574+01

Then, let's see what the materialized view reports FROM usr_local_etc_snapshot snapshot
(See Listing 3 above).
WHERE snapshot.hash <> (

As expected, it does still report the old hash, the SELECT hash
data within the materialized view which has not been
modified. What this means is that the content column FROM current

of the view also has a track of the old (i.e., before WHERE service =
editing) content of the same file, allowing for a quick snapshot.service )
(and dirty) restore of the file content.

What has Changed? UNION

The fact that the materialized view contains the


snapshot of the filesystem allows for querying the
SELECT service, ts AS ModifiedSince
status of the filesystem itself against the previous
(last) snapshot: FROM usr_local_etc_snapshot snapshot

# WITH current AS ( WHERE NOT EXISTS (

SELECT service, md5( content ) AS hash SELECT service

FROM usr_local_etc FROM current

) WHERE service =
snapshot.service );
SELECT service, ts AS ModifiedSince

Meet the Author

Luca Ferrari lives in Italy with his beautiful wife, his great son, and two female cats. Computer science
passionate since the Commodore 64 era, he holds a master degree and a PhD in Computer Science. He is
a PostgreSQL enthusiast, a Perl lover, an Operating System passionate, a UNIX fan, and performs as much
tasks as possible within Emacs. He considers the Open Source the only truly sane way of interacting with
software and services. His website is available at http://fluca1978.github.io

46
The above query is made up of three parts:

• current is a CTE (Common Table Expression), a


sub-query that computes the hash on the current
file system data (i.e., querying the FDW);

• the first SELECT extracts all files that have been


modified since the last snapshot (i.e., since the last
REFRESH MATERIALIZED VIEW);

the second SELECT extracts all files deleted since


the last snapshot.

Running the above query provides the following


result:

service | modifiedsince

---------+-------------------------------

pkg | 2017-11-09 16:54:30.668574+01

meaning that the pkg service has been modified


since the last time it was taken into the materialized
view.

Conclusions
This article has demonstrated a concrete application
of PostgreSQL Foreign Data Wrappers feature to
allow the database to query other data sources, in
particular, a file system to get and track file
information. There are a lot of FDW implementations
allowing even more, like web browsing and parsing,
other database querying, web service interactions
and so on. These can all be used as building blocks
for a more complex layer of data management.

References
PostgreSQL web site: http://www.postgresql.org

PostgreSQL FDW:
https://wiki.postgresql.org/wiki/Foreign_data_wrappers

Multicorn FDW: http://multicorn.org/

47
ADMIN

Free RDP Configuration


In this article, you will learn how to setup HP t620 In the window that has just opened, click on the
Thin Client with Linux Kernel. First, we must enable “Switch to ThinPro” button and allow the ThinPro OS
the Admin Mode. To do this right-click anywhere on to load. This shouldn’t take too long. When the OS
the desktop. Then, click on “Switch Admin/User has finished loading, proceed with the configuration.
Mode”.

If this is your first time, you must set a password


which you will use to access the Admin Mode. As
soon as you are logged in, a red border will appear
Displays
around the desktop to signalize that you’re in the
To edit our display settings, let’s click on the settings
Admin Mode.
button in the taskbar. Then, hover over “Peripherals”
Now, we can start with the configuration. I suggest and click on “Display Preferences”.
switching to the ThinPro OS. To do this, we have to
access the settings through the taskbar, hover over
“Setup” , and then click on “Customization Center”.

At this stage, we can choose which display to be the


primary display and which one to be the secondary
display. Also, we can choose the direction in which
the screen should extend.

48
Setting up a remote desktop
connection
To start a remote desktop connection, we must open
the connection manager. We can open it by clicking
on the following symbol in the taskbar:

The next step is deciding what kind of remote


Keyboard connection we would like to establish. For this
example, let’s choose a custom connection.
By default, the keyboard layout is set to US (United
States). To change the keyboard layout, right-click on
the “US” letters at the bottom right of the primary
display. Then, click on “Keyboard layout”

In the active window, choose your preferred keyboard


layout in the “Standard Keyboard” dropdown.

Let’s give the connection a good name, so that we


know to which PC to connect. I recommend using
other PCs’ IP as the name of the connection. Let’s
start with the command we want to run. First, we
must decide which program we’re using. Let’s
choose xfreerdp, and type it in the command box
accordingly. Now we could just enter the IP of the PC
we wish to connect to and start the connection, but if
we do that, the resolution will be quite uncomfortable
to look at and only one monitor will be used. That’s
not necessarily bad if we only want to use one
monitor for the remote connection. However, if we
want to use more, we must add some commands. An

49
important thing to mention is that the order of the In the end, the complete command should look
commands we enter matters. That’s why we’ll start something like this:
by fixing the resolution.

Resolution

To get rid of the nasty resolution, type the following


commands right after the “xfreerdp” in the command
box:

+aero - This will manage the desktop composition of


the remote connection.

+smart-sizing - Scales the remote desktop to the


window size.

+fonts - Enables smooth fonts, this will make the


resolution much more comfortable.

-f - Full screen mode

The following commands are used to decide the


resolution of the window:

/monitors:0,1 This will determine how many


monitors will be used. Make sure to start counting
from zero.

/multimon This enables you to use multiple


monitors.

/w: Determines the width of the window.


Meet the Author
/h: Determines the height of the window.
Loris Zimmerman is an IT student who works at
OBRO AG in Switzerland. He is always
/size: Determines the screen size.
interested in computers and related stuff. He
(<width>x<height>)
started working in IT one and a half years ago. If
/compression:off Disables compression you wish to contact him, send an e-mail to:
loris.zim@hotmail.com
/bpp: Defines the color depth

Remaining commands:

/sound Enables sound from the


connection.

/v: *IP-Address* Here, we must enter the IP of the


device we want to connect to.

50
Among clouds
Performance and
Reliability is critical
Download syslog-ng Premium Edition
product evaluation here

Attend to a free logging tech webinar here

www.balabit.com

syslog-ng log server


The world’s first High-Speed Reliable LoggingTM technology

HIGH-SPEED RELIABLE LOGGING


above 500 000 messages per second
zero message loss due to the
Reliable Log Transfer ProtocolTM
trusted log transfer and storage

51
The High-Speed Reliable LoggingTM (HSRL) and Reliable Log Transfer ProtocolTM (RLTP) names are registered trademarks of BalaBit IT Security.
INTERVIEW

Interview with
Abdorrahman
Homaei
Can you tell our readers about yourself and your role nowadays?

Currently, I am busy with daily administration tasks and CoreBOX development which are getting harder and
intense. Besides, I have a company located in Iran Silicon Valley and have to manage my enterprise.


How you first got involved with programming and the FreeBSD world?

About 12 years ago, I was an active and professional windows developer. Secure programming was what
introduced me to the FreeBSD world. On FreeBSD, everything was orderly and documented. I think FreeBSD is
the developer’s paradise. I wrote my first application on FreeBSD 12 years ago, and as you can imagine, there
was no headache like windows, no undocumented API, and no crashing.

While having a wide field of expertise, please tell our readers on which area you put the most emphasis,
and why?

In my view, security is the most important expertise irrespective of the OS you are using. It doesn’t matter how
hard you interact with that OS, with shell or with 3D GUI, or who you are, if someone can hack you, then your
business is not reliable and you are the loser. Hence, I put much emphasis on security since it is the most critical
area.

What was your best work? What was the idea behind it? What was its purpose?

My best work was migrating my desktop to FreeBSD. Using FreeBSD as desktop is so complicated. Every day,
you face serious challenges but after a while, you will learn everything and become a geek. Using FreeBSD as
desktop teaches you how to solve any problem.

52
What is your the most interesting programming issue you have encountered, and why?

Migrating to FreeBSD was not easy. I was a device driver developer, but when you migrate to other
OS(FreeBSD) and you cannot even work with its command line, it’s so hard to develop a simple application.
Therefore, programming a device driver was impossible.

What tools do you use most often, and why?

CSH is my best friend because I can do everything in shell and it gives me a good feeling. I also frequently use
shell utilities like SSH. When it comes to development, I use C++ , QT , and many more.


What was the most difficult and challenging implementation you’ve done so far? Could you give us
some details?

I think you are talking about CoreBOX exactly. CoreBOX has a brilliant idea behind. Using FreeBSD as a
role-based hypervisor is state of the art. In the beginning, you must choose your mechanism to control the
hypervisor. You can create a web-based access or application access like many others. Selecting each one will
force you to learn how to authenticate users, send and receive data.

CoreBOX neither uses web-based access nor a custom application. CoreBOX is clientless, and you can connect
to virtual desktop.

Can you tell us more about your company?

My company’s name is “etesal amne sara tehran”. I have a 5 year old daughter, and I named the company after
her name, Sara. My company is based in Iran Silicon Valley. Our main domain is virtualization, and we use
FreeBSD as our infrastructure.

What is CoreBOX?

CoreBOX is a Type-2 FreeBSD-Based High-Performance hypervisor, designed for building carrier-grade virtual
infrastructure.

What future do you see for FreeBSD and other OSes? Can you tell us about your favorite features in the
new releases?

It seems FreeBSD is more focused on single-board computers. Many companies like FreeBSD because of its
liberal license. I hope we will see more from FreeBSD and NetBSD in IoT market. Support for the Allwinner A13
board has been added, and it’s interesting.

Do you have any specific goals for the rest of this year?

My goal is to add CoreBOX new features like adding new resource scheduler and auto-tuning.

What’s the best advice you can give to the BSD magazine readers?

FreeBSD is an enterprise-class operating system which is reliable and secure. The only way to learn FreeBSD is
to install it on your desktop.

Thank you


53
INTERVIEW

Interview
with
Oleksandr
Tymoshenko
Can you tell our readers about yourself and your
role nowadays?

My name is Oleksandr Tymoshenko. I am a software


developer with more than 15 years of experience. HPUX system, I picked up some basic C knowledge.
Over these years, I worked on a number of projects In my second year at the university, I got a job at
in various fields including Linux PDA software, SMS university's network operations center. They used
center for GSM telco, servers for multiplayer games, FreeBSD on most of the systems, and that was the
IP PBX box, and firmware for VoIP phones. first time I tried this OS. I think it was FreeBSD 3.0. I
did some sysadmin work for NOC, experimented
I am have been a FreeBSD committer since 2008 with kernel hacking, just examples mostly, nothing
when I started as a FreeBSD/MIPS developer, but really exciting.
switched camp to FreeBSD/ARM sometime later.
My first commercial FreeBSD experience was
At the moment, I work for Dolby Laboratories as a porting drivers for telephony cards (PCI boards you
senior software developer, building conferencing could connect to phone lines) for small IP PBX
products. We don't use FreeBSD in our products, startup. While working for this startup, I came across
FreeBSD is just my hobby. MIPS boards and got interested in its architecture. I
thought that it would be nice to run FreeBSD on
How you first got involved with programming and them. So I found some initial work in this area done
the FreeBSD world? by Juli Mallet, and started experimenting with things.
That was the start of my active contribution to a
I was fascinated by computers between theage of 8 FreeBSD project.
to 9, but only got a chance to work with them when I
switched school at 14. I started learning Turbo
Pascal, then 8086/80286 assembly and couple years
later, after I had got access to local university's

54
You have a wide field of expertise, but please tell our readers which area you put the most emphasis on, and why?

Since I don't base my career on FreeBSD work, I pick whatever is fun to toy with. I like working on hobbyist ARM boards like the Raspberry Pi and BeagleBone product families. Hardware-wise, they are simple enough that you can easily master the whole architecture. There are no complex clock domains or super-intricate power management controls, and yet they're powerful enough that you don't have to fight for every kilobyte of RAM. You can easily extend them with external devices using the I2C, GPIO or SPI buses. They're pure tinkering material and a lot of fun.

What was your best work? What was the idea behind it? What was its purpose?

There is no single project I can point at and call a magnum opus. I'd say my cumulative contributions to FreeBSD are my best work so far, or at least the most impactful. They're building blocks and stepping stones for other people's projects. It's very rewarding to see your work being used by other developers and hobbyists in the most unexpected ways, or to see how once buggy and unstable code, after some time and effort, becomes a rock-solid platform for someone else's product.

What is the most interesting programming issue you’ve encountered, and why?

To be honest, I don't have any interesting debugging war stories. Debugging is exciting in a puzzle-solving way while you're in the middle of it. More often than not, though, hours of painstaking searching boil down to a one-liner fix: a missing cache sync operation, a memory barrier, or in the worst case, an extra semicolon in the wrong place. It's only entertaining for a day or so.

What tools do you use most often, and why?

For day-to-day work, I mostly use tmux + vim with a few plugins as my dev environment, and ack (textproc/ack) for code search. tmux offers powerful features for organizing a workspace. I group windows into sessions by theme, e.g., an ARM work-in-progress session with a build shell, serial terminal and editor, then another session for Bugzilla work. This environment runs on either a server or a desktop machine, and I can SSH to it and reconnect to the session from anywhere. On my laptop, I use i3, a tiling WM that goes well with the tmux/vim combination. I use Subversion for FreeBSD stuff, git for personal projects, and Perforce at work. I'm a long-time fan of the mutt mail client, which I use for most of my personal and open-source related communications. For communication software: irssi for IRC and profanity as an irssi-like Jabber client.

What was the most difficult and challenging implementation you’ve done so far? Could you give us some details?

That would be porting u-boot and FreeBSD to the Raspberry Pi. I wanted to create a FreeBSD port for the Pi, but to work on it, I needed a way to netboot the device. That's essential when you start to work on embedded device support for FreeBSD. Board support is added to the kernel config step by step. First, you check whether control is passed to the kernel entry point by printing something to the serial console in the _start method. Thereafter, you move the debug output further into the kernel initialization routine, and so on. Sometimes this process is smooth and requires just a few iterations, but often it's multiple fix/build/boot cycles. Without netboot, you have to extract the SD card, write the new kernel to it, put it back, power-cycle the board, and check the results. Wash, rinse, and repeat. It is slow, dull, and tedious. With netboot, all you need to do is build the kernel, copy it to the TFTP server, and power-cycle the board. The board will get an address via DHCP, download the kernel from the TFTP server, and pass control to it.

Normally, boards come with a versatile boot loader called u-boot (which is the de facto standard these days) built by hardware vendors like Freescale or Marvell. The Raspberry Pi back then didn't have it, and the proprietary boot code from Broadcom could only load a kernel from the SD card. So I had to port u-boot to the Raspberry Pi first, using that slow and tedious process; there was no way around it. For netboot, I needed a network card driver. Luckily, u-boot had one. However, I hit a snag because the USB controller driver was absent. Additionally, there was no datasheet and no way to put a USB protocol analyzer between the USB host and the Ethernet controller to see if my changes affected anything; they were both integrated on the board.
Hence, I used the Linux driver as a reference and tried to implement a minimal subset of functionality to get Ethernet working. Every iteration involved moving the SD card back and forth between the Pi and the desktop. I almost quit a couple of times, but out of sheer stubbornness I kept experimenting and was finally able to transfer the kernel from the TFTP server. The code was terrible, but it was good enough to unblock my FreeBSD work.
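
For readers curious about the early bring-up technique described above, here is a minimal sketch of a "print something from _start" helper for the original Raspberry Pi. It is an illustration only, not code from the FreeBSD tree: it assumes the BCM2835's PL011 UART is mapped at physical address 0x20201000 and that the boot firmware has already configured the UART clock and baud rate; the register offsets, macros, and function names are illustrative assumptions.

/*
 * Minimal early debug output for the original Raspberry Pi (BCM2835).
 * Assumes the firmware has already set up the PL011 UART; the base
 * address, offsets, and names below are illustrative assumptions,
 * not taken from the FreeBSD source.
 */
#include <stdint.h>

#define UART0_BASE 0x20201000UL                                 /* PL011 UART (assumed) */
#define UART0_DR   (*(volatile uint32_t *)(UART0_BASE + 0x00))  /* data register */
#define UART0_FR   (*(volatile uint32_t *)(UART0_BASE + 0x18))  /* flag register */
#define FR_TXFF    (1U << 5)                                    /* transmit FIFO full */

static void early_putc(char c)
{
        /* Busy-wait until the transmit FIFO has room, then send the byte. */
        while (UART0_FR & FR_TXFF)
                ;
        UART0_DR = (uint32_t)c;
}

static void early_puts(const char *s)
{
        while (*s != '\0')
                early_putc(*s++);
}

/* Call this as soon as the assembly _start stub hands control to C code. */
void early_hello(void)
{
        early_puts("hello from the kernel entry point\r\n");
}

Once output like this shows up on the serial console, the same helper can be moved step by step into the kernel initialization routine, exactly as described above.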
Can you tell us more about the Bugmeister team? What did you do there?

The Bugmeister team works on keeping the bug tracking system running, and on making bug reporting and tracking as easy as possible. The less effort required to work on bugs, the higher the chances they'll be worked on. FreeBSD used to use the GNATS bug tracking system, which was dated and, to put it mildly, not super flexible. 4-5 years ago, it was decided that the project should switch to a better tool, and the Bugmeister team was responsible for picking and deploying it. I was one of the developers working on that task. I resigned from the team in 2014, a few months before the actual switch happened. I volunteered back in 2016 to do some routine administrative tasks like helping users with passwords, killing spam PRs, and making minor modifications to the Bugzilla codebase.

FreeBSD uses a customized version of Bugzilla. The Bugmeister team is responsible for maintaining this version and adding new features to automate some of the tedious work or adapt it to FreeBSD workflows.

Can you tell our readers more about your commits to FreeBSD?

I do not commit to FreeBSD much these days. I am busy with a daytime job, and there is just not enough spare time to get back into the FreeBSD flow. I do occasional fixes for some FreeBSD/ARM drivers and commit patches submitted by contributors. But nothing major, unfortunately.

What future do you see for FreeBSD and other OSes? Can you tell us about your favorite features in the new releases?

I am terrible at making predictions. I think FreeBSD is gaining momentum as a "true" server/development UNIX. With systemd haunting major Linux distros, people are starting to look for alternatives, and since FreeBSD has a more conservative approach and features like ZFS, it makes a good candidate in the server space. But again, this might just as well be my echo chamber.

Moreover, there is an ongoing effort to use FreeBSD as an educational OS, which I support with all my heart. I think FreeBSD is an example of good engineering and an excellent primer in OS design.

I use 12-CURRENT on my laptop, so I don't wait for new releases. The most exciting recent feature for me was the drm-next-kmod port. Now I don't have to build a custom kernel branch to get decent Xorg performance from my Kaby Lake-based ThinkPad.

Do you have any specific goals for the rest of this year?

I want to add VideoCore interface support for the Raspberry Pi 3. The Pi 3 is a 64-bit device while the older Pis are 32-bit, and some work is required to port advanced features like audio or OpenGL. It almost works, but as usual, there is some weird bug which requires a long enough stretch of time to sit down and work on it.

What’s the best advice you can give to the BSD magazine readers?

Use contributions to open-source projects as learning opportunities. Find an area you're interested in, submit patches, ask for feedback, ask questions, and ask for references to code, papers, and books. Try mailing lists, try IRC/Slack channels, and try emailing the author. Sometimes there will be no response, sometimes people will forget to follow up, or you might come across a toxic person at some point. Whichever the case, try not to get discouraged. There are a lot of people in the open-source community who are willing to share their knowledge and experience: find them, grow your network, and keep those patches coming.

Thank you

COLUMN

On October 1st, the Network Enforcement Act took effect in Germany. This
creates a legal framework for censorship of the Internet. As more and more
governments take the hammer of censorship to content, what are the
ramifications for free speech, but more importantly, has the Internet come
of age?

ROB SOMERVILLE

As a writer and technologist, I fall into the same category as musicians, artists and philosophers have throughout the ages. Torn between commercial reality and freedom of expression, I have to continually examine any output to ensure that it treads a fine line between entertainment, education, and offence. More often than not, what I want to say has to be responsible and truthful, yet that strips away the core emotion and guttural meaning behind the message. Or to put it another way, bring race, religion, politics or money into the equation and you leave yourself open to criticism and/or censure. Which is fair enough, if you have an all-out bias or level of opinion that firmly places you in the category of bigot, bore or banshee. I would agree that the first two categories deserve a certain degree of civilised opprobrium, the latter much less so.

A banshee, according to Wikipedia, is a female spirit in Irish mythology who heralds the death of a family
member, usually by wailing, shrieking, or keening. This is not to be confused with a troll or other general
troublemaker, as the wail is not so much to cause irritation to the ears but to signal to the wider community that a
tragedy, an injustice has taken place. It is not without significance that women are given a wide tolerance to
express grief in any civilised society, as they are often economically the individuals who have to bear the
consequences of the death of a child or a partner. In this egalitarian age, I would humbly submit that when
pushed to the limits, men are also capable of such expression, albeit with fewer tears. And that is the problem: we have opened a Pandora’s box of communication, where all sides can come to the table, argue, make adjustments, refine their weapons, and come back with a stronger case to batter their “opponents” into submission.

Free speech is an integral part of any civilised society. I am loath to say democratic here, as many individuals
who have a valid complaint or axe to grind have often been ignored by the wheels of justice. The Internet is a
powerful weapon in this regard, as the hypocrisy and inefficiency of government and politics at large can be
exposed for what they are. Given a revelation, a narrative, the only response the guilty party can offer is either a barrage of PR flak, an attempt to discredit the author, or worse still. The Internet is populated by
lunatics of course, with their conspiracy theories and psychologically damaged rantings.

Some of the most convincing arguments I have ever encountered have been put forward by the online community, and I had been online long before it became a dot on the political radar. There is something about writing, typing, that engages a different part of the brain. It gives the author enough freedom to self-edit, to say things in a more considered way than in a one-to-one conversation. In the real world, you might be a 6-foot man with a
shaved head, considerable muscle and tattoos, and me, a 5-foot female with brittle bone syndrome. Online, all
we have is a handle and accountability to the intelligence services and governments that monitor every byte of
traffic. That is a game changer, and the powers that be are scared, really scared. And not just of the trolls and
genuine lunatics.

All of this boils down to the rule of threes. We can work inside the system, outside the system, or consider a
different path. And here, I will be controversial. The Internet is spiritual. Unless we engage that side of ourselves,
all is naught. We are but cogs in a wheel, slaves to a machine, meaningless chemical factories that are doomed
to rot. A lot has been argued about fake news, credibility and the like, but we are now facing a crisis of faith.
Where the old era was concerned with faith in God, we are now looking at faith in information and content
providers. Faith, like everything else, has been neatly divided, then parceled up and placed in a corresponding
container. Rather than judging the actions of individuals, those in control have been bewitched by judging their
thoughts and motives – an affront to any human being.

The closing of these doors brings this whole problem into a difficult and sensitive arena – politics. Currently,
Russia is under attack from Western media (and indeed government) for being, ultimately, a purveyor of “fake news”. As far as propaganda is concerned, I think the West has a lot to answer for. All of this gibberish (and I
can’t think of any better word to describe it) goes back, as far as I can remember as a child, to when the USSR
managed to take the wind out of the sails of the USA by placing a working satellite in orbit, and the first
unmanned probe on the moon. Bereft of physical manifestation, the only recourse was demonisation and
character assassination. Losing face was never going to be pretty, and as a result, the Cold War continued for many more years. Who knows what world we would be living in if dialogue, engagement, and
discourse had been the result of this technological achievement?

Technology, like people, goes through growth stages. The wonder and innocence have now moved on, and we are left, at best, with an awkward teenager with spots or, at worst, a rich individual going through a midlife crisis.
The true players are making their presence felt at the watering hole of power, and no matter how dirty their
hooves, legs or bodies are, they desire to bathe in the cool waters where others drink. It is time for a clean
water act.
