Check Failover Status in ISC-DHCP Server: A Detailed Guide

In today’s interconnected world, ensuring high availability of services like DHCP is crucial. The ISC DHCP server, a popular choice for managing network addresses, supports a failover mechanism. This blog post will guide you through checking the synchronization status between the primary and secondary servers in a failover configuration.

Setting Up Failover in dhcpd.conf

Before diving into the synchronization status, let’s set up the failover in the dhcpd.conf file. Here, you need to specify the OMAPI port, which is used for management tasks such as checking the failover status. An example configuration is shown below:
omapi-port 7911;
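The omapi-port line alone does not define a failover pair. For orientation, a failover peer declaration in dhcpd.conf looks roughly like the sketch below; the peer name "internal-network" matches the one used later in this post, while the addresses and timing values are only placeholders you have to adapt:

failover peer "internal-network" {
    primary;                      # use "secondary;" on the partner server
    address 192.168.1.10;         # this server's IP (placeholder)
    port 647;
    peer address 192.168.1.11;    # partner server's IP (placeholder)
    peer port 647;
    max-response-delay 60;
    max-unacked-updates 10;
    mclt 3600;                    # primary only
    split 128;                    # primary only
}

Every pool that should take part in the failover must then reference the peer with failover peer "internal-network"; inside its pool declaration.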

Restart the Service

sudo systemctl restart isc-dhcp-server

Using omshell to Check Failover Status

Start omshell:

omshell

and use the following commands:

server localhost
connect
new failover-state
set name = "internal-network"
open

Important note on set name: this must be the failover peer name that is defined in dhcpd.conf.
The two important lines from the output of open are these:

partner-state = 00:00:00:02
local-state = 00:00:00:02

Understanding the Failover Status Output

What do these hexadecimal values mean? If you see a value other than 02, you should dig deeper to find out what is wrong.
The value indicates the present state of the DHCP server in this failover relationship. Possible values for the state are:

1 – startup
2 – normal
3 – communications interrupted
4 – partner down
5 – potential conflict
6 – recover
7 – paused
8 – shutdown
9 – recover done
10 – resolution interrupted
11 – conflict done
254 – recover wait

You can also find this in the documentation:
https://kb.isc.org/docs/isc-dhcp-44-manual-pages-dhcpd
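If you check this regularly, you can also drive omshell non-interactively. A minimal sketch, assuming the failover pair is named "internal-network" as above:

#!/bin/bash
# Query the failover state via omshell and print only the two state lines
omshell <<'EOF' | grep -E 'local-state|partner-state'
server localhost
connect
new failover-state
set name = "internal-network"
open
EOF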

cheap homelab server

In this blog post, I’m sharing my experience assembling a budget-friendly yet powerful system designed for demanding virtualization tasks. This is a server with 28 threads, more than enough for most tasks. I chose the LGA 2011-3 platform because there are some really exciting motherboards from China.

The Mainboard: Machinist MR9A

The cornerstone of my project is the Machinist MR9A mainboard, purchased for 55 on AliExpress. This mainboard offers two M.2 slots for additional storage expansion – perfect for systems with high storage demands. It supports PCIe 3.0, ensuring high data transfer rates and compatibility with newer hardware components. The chipset is specifically designed for Intel Xeon processors of the E5-2600 V3/V4 series, making it an excellent choice for my purpose.

Memory: Crucial DDR4-2133 DIMM

The backbone of the memory system consists of four 16GB Crucial DDR4-2133 DIMM modules (CT16G4RFD4213), providing a total of 64GB of capacity. Acquired for 40 on eBay, they offer ample resources for virtualization tasks.

The CPU: Intel Xeon E5-2697V3

The Intel Xeon SR1XF E5-2697V3 processor, part of the Haswell-EP generation, features 14 cores and a base frequency of 2.60 GHz. It is notable for offering 40 PCIe 3.0 lanes, providing vast expansion capabilities and flexibility for system builds. The processor’s detailed specifications can be viewed on Intel’s ARK page. This CPU was purchased for 23.50.

Storage: Utilizing Existing SSDs

For storage, I’m using my existing M.2 SSDs – one 1TB and one 256GB – which provide ample space for virtual machines and data. The system disk is an older 256GB SATA SSD, which is sufficiently fast for the system’s needs. While there are plans to upgrade to a larger M.2 SSD in the future, these current storage solutions are more than adequate for the initial setup.

Graphics: Matrox Millennium P690

A Matrox P69-MDDE128LA1F Millennium P690 graphics card with 128 MB GDDR2 was selected for its minimal PCIe lane usage, freeing up the remaining slots for SSDs. The cost: 15.

Cooling: Be quiet! AIO

A Be quiet! AIO water cooler for 25 ensures the CPU remains efficiently cooled under load. That was new old stock from a friend.

Case and Power Supply

To house the components, I used a case I already had. Additionally, a pre-owned FSP 500 power supply was utilized for this build. Both of these components were leftovers from previous projects, contributing to the cost-effectiveness of this setup.

Software: Ubuntu 22.04 LTS and KVM with Libvirt

On the software side, the server runs Ubuntu 22.04 LTS, a robust and stable operating system ideal for server environments. Virtualization is managed using KVM with Libvirt, providing a powerful and flexible platform for running multiple virtual machines.
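For reference, a minimal sketch of getting KVM and libvirt running on a stock Ubuntu 22.04 install (not necessarily the exact steps of this build):

# Install KVM, libvirt and supporting tools
sudo apt install qemu-kvm libvirt-daemon-system libvirt-clients virtinst

# Verify that the host is ready for KVM virtualization
virt-host-validate

# The E5-2697V3 should report 28 threads
nproc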

Conclusion

This system demonstrates that with a budget of under 160, one can build a powerful virtualization system. The upcoming weeks will show how it performs in practice.

Summary of Components and Prices:

  • Mainboard: Machinist MR9A (with 2x M.2 slots, PCIe 3.0, Intel Xeon-compatible chipset) – 55
  • Memory: 4x Crucial 16GB DDR4-2133 DIMM – 40
  • CPU: Intel Xeon E5-2697V3 – 23.50
  • Graphics: Matrox Millennium P690 – 15
  • Cooling: Be quiet! AIO – 25

Total cost: approximately 158.50

Some pictures of what the cheap homelab setup looks like:

Installing the HWE Kernel on Ubuntu

In the Linux world, keeping your system up-to-date is crucial for security and access to the latest features. One way to do this on Ubuntu is by using the Hardware Enablement (HWE) kernel. This post will guide you through the simple process of installing the HWE kernel on Ubuntu 20.04 LTS and Ubuntu 22.04 LTS.

What is the HWE Kernel?

The HWE kernel is an updated version of the kernel that comes with new Ubuntu Long Term Support (LTS) releases. The primary advantage of using the HWE kernel is that it provides newer kernel features and hardware support to users who are using an LTS release.

Installing the HWE Kernel

Installing the HWE kernel is straightforward. Depending on your Ubuntu version, you can use one of the following commands:

For Ubuntu 20.04 LTS, open a terminal and type the following command:

sudo apt install linux-generic-hwe-20.04

For Ubuntu 22.04 LTS, use this command instead:

sudo apt install linux-generic-hwe-22.04

Please note that you need to have administrative privileges to run these commands. You’ll be asked for your password during the process.
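After the installation and a reboot, you can quickly verify that the HWE kernel is actually in use; a small sketch (the exact version strings will differ on your system):

# Show the currently running kernel version
uname -r

# List the installed HWE meta packages
apt list --installed 2>/dev/null | grep linux-generic-hwe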

Important Information About the HWE Kernel

Please be aware that the HWE kernel comes with 6 months of support. After this period, you’ll receive a new HWE kernel through the standard update process. This cycle allows you to continually benefit from the latest kernel improvements while staying on the LTS release.

For more information about the Ubuntu kernel release cycle, visit the official Ubuntu Kernel Release Cycle page.

Bash script: detect OS and execute command

If you manage many servers with different Linux distributions and want to run a check only on Ubuntu, you can use this script to detect the OS and execute the command only when the OS matches.

#!/bin/bash
# Run a command only if the local distribution is Ubuntu,
# detected from the first word of /etc/issue
if [ -e /etc/issue ]; then
    issue=$(cat /etc/issue)
    set -- $issue
    if [ "$1" = "Ubuntu" ]; then
        # Example command: print the installed nginx version (e.g. 1.18.0)
        nginx -v 2>&1 | grep -o '1.*' | cut -c1-6
    fi
fi
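Note that /etc/issue is sometimes customized by administrators. A variant of the same idea that reads /etc/os-release instead (my own sketch, not part of the original script):

#!/bin/bash
# Alternative: detect the distribution via the ID field in /etc/os-release
if [ -r /etc/os-release ]; then
    . /etc/os-release          # sets ID, e.g. "ubuntu"
    if [ "$ID" = "ubuntu" ]; then
        nginx -v 2>&1 | grep -o '1.*' | cut -c1-6
    fi
fi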

Cisco VPN server alternative

Are you tired of OpenVPN performance issues? Do you want to use more open-source software? Do you want to replace your Cisco ASA without giving up the AnyConnect VPN client software? Do you want a reliable VPN server for your business? I found a solution for these requirements.

The OpenConnect VPN server, called ocserv

https://ocserv.gitlab.io/www/

You can use the AnyConnect client or the OpenConnect VPN client to dial in to the ocserv VPN server.

On Debian/Ubuntu
apt install ocserv
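On Debian and Ubuntu the package ships a systemd unit; if the service is not already running after the install, you can enable and start it like this:

sudo systemctl enable --now ocserv
systemctl status ocserv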

You have installed the VPN server, but in enterprise environments that is not enough for security, especially if you want to use it for hundreds or thousands of employees.

I want to show you my configuration of ocserv with RADIUS integration and privacyIDEA, an open-source two-factor solution.

If you want to use ocserv with RADIUS, please read this first:

https://ocserv.gitlab.io/www/recipes-ocserv-authentication-radius-radcli.html

You have to compile radcli from source first; without it you have no RADIUS functionality.

Look at https://github.com/radcli/radcli/releases for the latest version

How to compile
https://ocserv.gitlab.io/www/recipes-ocserv-radcli-installation.html
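For orientation, a typical autotools build of a radcli release tarball looks roughly like this; the file name is only a placeholder, and the linked recipe has the exact steps:

# Rough sketch of building radcli from a downloaded release tarball
tar xf radcli-X.Y.Z.tar.gz
cd radcli-X.Y.Z
./configure
make
sudo make install
sudo ldconfig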

Fill in the information for your RADIUS server in

/etc/radcli/radiusclient.conf


nas-identifier fw01
authserver 10.10.10.50
acctserver 10.10.10.50
servers /etc/radcli/servers
dictionary /etc/radcli/dictionary
default_realm
radius_timeout 10
radius_retries 3
bindaddr *

cat /etc/radcli/servers

# Server Name or Client/Server pair            Key             
## ----------------                             ---------------
#
#portmaster.elemental.net                       hardlyasecret
#portmaster2.elemental.net                      donttellanyone
#
## uncomment the following line for simple testing of radlogin
## with freeradius-server
#
#localhost/localhost                            testing123
#
10.10.110.60 yourradiussecrectkey

After you have compiled radcli on the system, you can choose to install ocserv from the distribution repository or to compile it from source. I used the distribution repository.

Add the following to ocserv.conf:


auth = "radius[config=/etc/radcli/radiusclient.conf,groupconfig=true]"

To fix some client-side errors, also add:

custom-header = "X-CSTP-Client-Bypass-Protocol: true"

Add your own certificate for your domain

server-cert =
server-key =

VPN Pool

ipv4-network = 10.10.100.128
ipv4-netmask = 255.255.255.128

Add routes to the networks that you want to reach from the VPN server:

route=
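After editing the configuration, restart ocserv and test the login from a client machine; the hostname below is just a placeholder for your own VPN address:

sudo systemctl restart ocserv

# Test from a client with the OpenConnect command line client
sudo openconnect vpn.example.com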

Asus Hyper M.2 x16 card v2: putting it in a server

Here we go: I had to put this card into my server with a C620 chipset because I need fast local storage.
The card is not very expensive, but I need 4x 4 TB SSDs in this server. So I started with a simple test: I bought 4x 16 GB Optane SSDs for under $14 per piece. If it runs with these SSDs, it will also run with the big ones.
For this to work, we need a chipset that supports PCIe bifurcation. With bifurcation you can split a 16x PCIe slot into 2x8, 4x4x4x4, or 8x4x4. We need 4x4x4x4 because every SSD needs four PCIe lanes. If you want to read more about this topic, look here: https://blog.donbowman.ca/2017/10/06/pci-e-bifurcation-explained/

I did not really have enough space; there are also 4x 2080 Ti graphics cards in this server. It was a very tight fit, but the card went in.

The next step was to configure the BIOS (UEFI) to talk correctly to this card. The automatic mode does not work: in automatic mode it detects only one SSD, the first one on the Asus card.

I used slot 10 on the motherboard; this information is important to configure PE3 on CPU2 correctly. Look in the motherboard manual to see which slot is connected to which CPU. Slot 11 does not work because it only provides four PCIe lanes; we need 16 lanes to split into 4x4x4x4.
Let's jump into the BIOS.

In the BIOS, follow the pictures below.

We have to choose the right CPU; we use PE3 on CPU2.

Change IOU2 from Auto to 4x4x4x4, so that every SSD runs with four PCIe lanes.

That was it.

Check under Linux with the command dmidecode -t 9.
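To confirm that all four SSDs are actually visible to the operating system, a quick sketch of the commands I would use:

# Show PCIe slot usage (as mentioned above)
sudo dmidecode -t 9

# List all NVMe controllers detected on the PCIe bus
lspci | grep -i "non-volatile"

# List block devices with model and size -- all four SSDs should appear
lsblk -d -o NAME,MODEL,SIZE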