Sometimes you get this error when adding a key with ssh-add mykey:
Could not open a connection to your authentication agent.
Then try this:
exec ssh-agent bash
Found here.
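An alternative that keeps your current shell (a minimal sketch; the key path is a placeholder, not from the original post) is to start the agent and load its environment with eval:

```shell
# Start ssh-agent and export its variables (SSH_AUTH_SOCK, SSH_AGENT_PID)
# into the current shell instead of replacing the shell with exec
eval "$(ssh-agent -s)"

# Then add your key to the running agent, for example:
#   ssh-add ~/.ssh/mykey   (path is a placeholder)

# The agent socket is now set for this shell and its children
echo "agent socket: $SSH_AUTH_SOCK"
```

The exec variant replaces your shell; the eval variant keeps it, which is handy inside scripts or tmux sessions.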
Are you tired of OpenVPN performance issues? Do you want to use more open source software? Do you want to replace your Cisco ASA, but don't want to give up the AnyConnect VPN client software?
Do you want a reliable VPN server for your business? I found a solution for these requirements:
the OpenConnect VPN server, called ocserv.
https://ocserv.gitlab.io/www/
You can dial in to the ocserv VPN server with the AnyConnect client or with the OpenConnect VPN client.
On Debian/Ubuntu: apt install ocserv
You have installed the VPN server, but in enterprise environments that is not enough for security, especially when you want to use this for hundreds or thousands of employees.
I want to show you my configuration of ocserv with RADIUS integration against privacyIDEA, an open source two-factor solution.
If you want to use ocserv with RADIUS, please read this first:
https://ocserv.gitlab.io/www/recipes-ocserv-authentication-radius-radcli.html
You have to compile radcli from source first; without it you have no RADIUS functionality.
Look at https://github.com/radcli/radcli/releases for the latest version
How to compile
https://ocserv.gitlab.io/www/recipes-ocserv-radcli-installation.html
Fill in the information for your RADIUS server in
/etc/radcli/radiusclient.conf:
nas-identifier fw01
authserver 10.10.10.50
acctserver 10.10.10.50
servers /etc/radcli/servers
dictionary /etc/radcli/dictionary
default_realm
radius_timeout 10
radius_retries 3
bindaddr *
cat /etc/radcli/servers

# Server Name or Client/Server pair    Key
# ----------------                     ---------------
#portmaster.elemental.net              hardlyasecret
#portmaster2.elemental.net             donttellanyone
#
## uncomment the following line for simple testing of radlogin
## with freeradius-server
#localhost/localhost                   testing123
10.10.110.60                           yourradiussecrectkey
After you have compiled radcli on the system, you can choose to install ocserv from the distribution repository or to compile it from source. I used the distribution repository.
Add the following to
ocserv.conf
auth = "radius[config=/etc/radcli/radiusclient.conf,groupconfig=true]"
To fix some client errors, also set:
custom-header = "X-CSTP-Client-Bypass-Protocol: true"
Add your own certificate for your domain:
server-cert =
server-key =
VPN Pool
ipv4-network = 10.10.100.128
ipv4-netmask = 255.255.255.128
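As a quick sanity check on this pool (a sketch using Python's standard ipaddress module, not part of the ocserv setup itself): the 255.255.255.128 netmask is a /25, which leaves 126 usable client addresses.

```python
import ipaddress

# The VPN pool from ocserv.conf: 10.10.100.128 with netmask 255.255.255.128
pool = ipaddress.ip_network("10.10.100.128/25")

hosts = list(pool.hosts())
print(pool.prefixlen)              # 25
print(pool.num_addresses)          # 128 addresses in the block
print(hosts[0], "-", hosts[-1])    # usable range: 10.10.100.129 - 10.10.100.254
print(len(hosts))                  # 126 usable client addresses
```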
Add routes to the networks that you want to reach from the VPN server:
route=
Here we go: I had to put this card into my server with the C620 chipset. I need fast local storage.
The card is not very expensive, but I need 4x 4 TB SSDs in this server. So we started with a simple test: I bought 4x 16 GB Optane SSDs for under $14 per piece. If it runs with these SSDs, it will also run with the big ones.
For this to work we need a chipset that supports PCIe bifurcation. With bifurcation you can split a 16x PCIe slot into 8x8x, 4x4x4x4x or 8x4x4x. We need 4x4x4x4x, because every SSD needs 4x PCIe lanes. If you want to read more about this topic, look here: https://blog.donbowman.ca/2017/10/06/pci-e-bifurcation-explained/
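The lane arithmetic above can be sketched in a few lines (the mode names follow the notation used here, not any particular BIOS):

```python
# PCIe bifurcation: splitting one x16 slot into smaller lane groups.
# Every NVMe SSD on the quad-M.2 card needs its own x4 group.
modes = {
    "8x8x": [8, 8],
    "8x4x4x": [8, 4, 4],
    "4x4x4x4x": [4, 4, 4, 4],  # the mode we need: one x4 link per SSD
}

for name, groups in modes.items():
    # Each mode must still add up to exactly the slot's 16 lanes
    assert sum(groups) == 16
    print(name, "->", len(groups), "devices")
```

This also shows why a slot with only 4 electrical lanes cannot host the card: there are no 16 lanes to split.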
I did not really have enough space: this server also holds 4x 2080 Ti graphics cards, and it was a very close fit.
The next step was to configure the BIOS (UEFI) to talk to this card correctly. The automatic mode does not work; in automatic mode it detects only one SSD, the first one on the ASUS card.
I used slot 10 on the motherboard; this information is important to configure PE3 on CPU2 correctly. Look in the motherboard manual to see which slot is connected to which CPU. Slot 11 does not work: it only has 4x PCIe lanes, and we need 16x to split into 4x4x4x4x.
Let's jump into the BIOS.
In the BIOS, follow the rabbit pictures.
We have to choose the right CPU; we use PE3 on CPU2.
Change IOU2 from Auto to 4x4x4x4x, so that every SSD runs with 4x PCIe lanes.
That was it.
Check it under Linux with the command dmidecode -t 9
What a pain in the ass: if you use mount.cifs under Ubuntu 20.04, you may get stale file handle errors when editing files and more.
A look into the mount.cifs manpage shows an option that solves this issue:
noserverino: the client generates inode numbers itself rather than using the actual ones from the server.
See the section INODE NUMBERS for more information.
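For reference, a hedged sketch of how this could look as an /etc/fstab entry (server name, share, mount point, and credentials file are placeholders, not from this post):

//fileserver/share /mnt/share cifs credentials=/root/.smbcredentials,noserverino 0 0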
Original post here:
https://community.letsencrypt.org/t/getting-wildcard-certificates-with-certbot/56285
Requirement: access to the DNS records of the domain.
wget https://dl.eff.org/certbot-auto
chmod a+x certbot-auto
./certbot-auto certonly --manual -d *.domain.example -d domain.example --preferred-challenges dns-01 --server https://acme-v02.api.letsencrypt.org/directory
You will be prompted to add two TXT records to your DNS server; you should be able to do this.
That's all.
Ubuntu 16.04 is sometimes wtf.
How to escape this crazy error?
Preconfiguring packages …
Setting up systemd (229-4ubuntu21.1) …
Can’t locate Debian/AdduserCommon.pm in @INC (you may need to install the Debian::AdduserCommon module) (@INC contains: /etc/perl /usr/local/lib/x86_64-linux-gnu/perl/5.22.1 /usr/local/share/perl/5.22.1 /usr/lib/x86_64-linux-gnu/perl5/5.22 /usr/share/perl5 /usr/lib/x86_64-linux-gnu/perl/5.22 /usr/share/perl/5.22 /usr/local/lib/site_perl /usr/lib/x86_64-linux-gnu/perl-base .) at /usr/sbin/addgroup line 34.
BEGIN failed–compilation aborted at /usr/sbin/addgroup line 34.
dpkg: error processing package systemd (–configure):
subprocess installed post-installation script returned error exit status 2
Errors were encountered while processing:
systemd
E: Sub-process /usr/bin/dpkg returned an error code (1)
Run apt-file search AdduserCommon.pm on another Ubuntu VM (install apt-file separately on that machine). This way you can see which package the file belongs to.
apt-file search AdduserCommon.pm
adduser: /usr/share/perl5/Debian/AdduserCommon.pm
Now we reinstall the package adduser:
apt-get --reinstall install adduser
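If no second VM is at hand, a possible alternative (a sketch; it assumes the dpkg database itself is intact) is dpkg -S, which looks up the owning package of a file without needing apt-file:

```shell
# dpkg -S tells you which installed package owns a file, e.g.:
#   dpkg -S /usr/share/perl5/Debian/AdduserCommon.pm
#   -> adduser: /usr/share/perl5/Debian/AdduserCommon.pm

# The same lookup on a file that exists on any Debian/Ubuntu system:
dpkg -S /bin/sh
```

Unlike apt-file, dpkg -S only knows about packages that are already installed, which is exactly what we need here.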
Another way is to copy the whole folder /usr/share/perl5/ from the working VM to the broken VM.