
NFV OVS Tutorial

OVS/NFV
Tutorial
Basics and Hands On
nov/2016
Prof. Christian Rothenberg (FEEC/UNICAMP)
PhD candidate Javier Quinto (UNICAMP & INICTEL-UNI)
Agenda
● Open vSwitch Mirror
● Linux Containers (LXC)
● Docker
● IPSec & BroIDS VNF
Hands On
Accessing the Virtual Machines
# We provide two Ubuntu 16.04 virtual machines in OVA format; each
one has a static IP address.
# To start the VMs, open a terminal for each VM and type:
$ vboxmanage startvm Tutorial-SDN1
$ vboxmanage startvm Tutorial-SDN2
# Access each of the recently started VMs:
VM1: $ ssh [email protected] (password = tutorial)
VM2: $ ssh [email protected] (password = tutorial)
Agenda
1. Port Mirroring with OVS
1.1 – Creating a mirror interface with OVS
Mirror with Open vSwitch (OVS)
Question: What is port mirroring with OVS?
This exercise describes how to configure a mirror port on an Open
vSwitch bridge. The goal is to install a new guest that acts as an
IDS/IPS system.
The guest is configured with two virtual network interfaces. The first
interface has an IP address and is used to manage the guest. The other
interface is connected to the mirror port on the Open vSwitch.
Mirror with Open vSwitch (OVS)
1.1 Create a mirror interface with OVS (1/2)
# Create three network namespaces "LNS-1", "LNS-2" and "LNS-3"
$ sudo ip netns add LNS-1
$ sudo ip netns add LNS-2
$ sudo ip netns add LNS-3
# Create a new internal interface “port3”
$ sudo ovs-vsctl add-port br1 port3 -- set Interface
port3 type=internal
# Bind the internal interface to the corresponding
LNS
$ sudo ip link set port3 netns LNS-3
$ sudo ip netns exec LNS-3 ip link set dev port3 up
$ sudo ip netns exec LNS-3 ip addr add 11.0.0.3/24
dev port3
$ sudo ip netns exec LNS-3 ping 11.0.0.1 -c 2
[Figure: OVS bridge br1 with internal ports port1, port2 and port3 moved into namespaces LNS-1, LNS-2 and LNS-3.]
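Note that the figure assumes port1 and port2 already exist inside LNS-1 and LNS-2. If they were not created in a previous step, they can be set up the same way as port3 (a sketch following the pattern above; 11.0.0.1/24 matches the ping target used earlier, 11.0.0.2/24 is an assumption):
$ sudo ovs-vsctl add-port br1 port1 -- set Interface port1 type=internal
$ sudo ip link set port1 netns LNS-1
$ sudo ip netns exec LNS-1 ip link set dev port1 up
$ sudo ip netns exec LNS-1 ip addr add 11.0.0.1/24 dev port1
$ sudo ovs-vsctl add-port br1 port2 -- set Interface port2 type=internal
$ sudo ip link set port2 netns LNS-2
$ sudo ip netns exec LNS-2 ip link set dev port2 up
$ sudo ip netns exec LNS-2 ip addr add 11.0.0.2/24 dev port2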
Mirror with Open vSwitch (OVS)
1.1 Create a mirror interface with OVS (2/2)
# Create a mirror interface
$ sudo ovs-vsctl -- set bridge br1 mirrors=@m -- --id=@port2 get port port2 -- --id=@port1
get port port1 -- --id=@port3 get port port3 -- --id=@m create mirror name=test-mirror
select-dst-port=@port1,@port2 select-src-port=@port1,@port2 output-port=@port3
# Capture packets on port3 inside the namespace LNS-3
$ sudo ip netns exec LNS-3 tshark -i port3
# Remove the mirror from br1
$ sudo ovs-vsctl clear bridge br1 mirrors
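To see the mirror in action, generate traffic between LNS-1 and LNS-2 while tshark runs on port3, and inspect the mirror record (a quick check; it assumes port2 carries 11.0.0.2 as in the sketch above):
$ sudo ip netns exec LNS-1 ping 11.0.0.2 -c 5
$ sudo ovs-vsctl list mirror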
Agenda
2. Linux Containers (LXC)
2.1 – LXC Introduction
2.2 – Configuring LXC
2.3 – GRE with LXC
2.1 LXC Introduction
LXC is a userspace interface for the Linux kernel containment features.
Through a powerful API and simple tools, it lets Linux users easily create and
manage system or application containers.
Features: Current LXC uses the following kernel features to contain processes:
● Kernel namespaces (ipc, uts, mount, pid, network and user)
● Apparmor and SELinux profiles
● Seccomp policies
● Chroots (using pivot_root)
● Kernel capabilities
● CGroups (control groups)
LXC containers are often considered something in between a chroot and a
full-fledged virtual machine. The goal of LXC is to create an environment as
close as possible to a standard Linux installation, but without the need for a
separate kernel.
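For orientation, a minimal LXC lifecycle looks like the following (a sketch; the container name "demo" is only an example, and the same commands reappear in the exercises below):
$ sudo lxc-create -t download -n demo -- -d ubuntu -r trusty -a amd64   # create from the download template
$ sudo lxc-start -n demo                                                # start the container
$ sudo lxc-attach -n demo                                               # open a shell inside it
$ sudo lxc-ls --fancy                                                   # list containers and their state
$ sudo lxc-stop -n demo && sudo lxc-destroy -n demo                     # stop and remove it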
2.1 LXC Introduction
Architecture of Linux Containers
2.2 Configure LXC (1/5)
Accessing the virtual machines
# To start the VMs, open a terminal for each VM and type:
From term1: $ vboxmanage startvm Tutorial-SDN1
From term2: $ vboxmanage startvm Tutorial-SDN2
# Access to each one of the VMs recently started:
From term1: $ ssh [email protected] (password = tutorial)
From term2: $ ssh [email protected] (password = tutorial)
2.2 Configure LXC (2/5)
Installing the LXC containers
# Create an LXC container in each VM using the Ubuntu 14.04 (trusty)
template. We call the container in VM1 "LXC1" and the container in
VM2 "LXC2".
VM1: $ sudo lxc-create -t download -n LXC1 -- -d ubuntu -r trusty -a amd64
[Figure: container LXC1 runs inside VM1 (192.168.56.101) and container LXC2 inside VM2 (192.168.56.102); each attaches to the OVS bridge br0 of its VM.]
VM2: $ sudo lxc-create -t download -n LXC2 -- -d ubuntu -r trusty -a amd64
# List the containers recently created
VM1: $ sudo lxc-ls
VM2: $ sudo lxc-ls
2.2 Configure LXC (3/5)
Connecting LXC1 to the bridge br0 in VM1
Add the lines shown below to the LXC config file, and comment out the
line that starts with "lxc.network.link".
$ sudo vim /var/lib/lxc/LXC1/config
...
# Container specific configuration
lxc.rootfs = /var/lib/lxc/LXC1/rootfs
lxc.utsname = LXC1
# Network configuration
lxc.network.type = veth
lxc.network.veth.pair = host1
lxc.network.script.up = /etc/lxc/ovsup
lxc.network.script.down = /etc/lxc/ovsdown
lxc.network.flags = up
lxc.network.ipv4 = 11.1.1.11/24
lxc.network.ipv4.gateway = 11.1.1.1
#lxc.network.link = lxcbr0
lxc.network.hwaddr = 00:16:3e:fb:cb:db
2.2 Configure LXC (4/5)
Connecting LXC2 to the bridge br0 in VM2
Add the lines shown below to the LXC config file, and comment out the
line that starts with "lxc.network.link".
$ sudo vim /var/lib/lxc/LXC2/config
...
# Container specific configuration
lxc.rootfs = /var/lib/lxc/LXC2/rootfs
lxc.utsname = LXC2
# Network configuration
lxc.network.type = veth
lxc.network.veth.pair = host2
lxc.network.script.up = /etc/lxc/ovsup
lxc.network.script.down = /etc/lxc/ovsdown
lxc.network.flags = up
lxc.network.ipv4 = 11.1.1.12/24
lxc.network.ipv4.gateway = 11.1.1.1
#lxc.network.link = lxcbr0
lxc.network.hwaddr = 00:16:3e:fb:cb:dc
2.2 Configure LXC (5/5)
Scripts for Container-1 and Container-2
Create/delete the container interfaces on the OVS bridge
There is a script named "ovsup" that adds the container's host-side
interface to br0:
$ sudo vim /etc/lxc/ovsup
#!/bin/bash
BRIDGE="br0"
ovs-vsctl --may-exist add-br $BRIDGE
ovs-vsctl --may-exist add-port $BRIDGE $5
There is another script named "ovsdown" that removes the interface
created by OVS when the container was brought up:
$ sudo vim /etc/lxc/ovsdown
#!/bin/bash
BRIDGE="br0"
ovs-vsctl del-port $BRIDGE host1
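The script above hardcodes host1 (the veth name used in VM1); for VM2 it should remove host2 instead. A more general sketch, assuming LXC passes the host-side interface name to the down hook as $5 in the same way it does for the up hook:
#!/bin/bash
BRIDGE="br0"
# Remove whichever host-side veth LXC reports, if it is still attached
ovs-vsctl --if-exists del-port $BRIDGE $5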
2.3 Tunnel GRE with LXC (1/3)
# Create the bridge br0 in each VM
VM1: sudo ovs-vsctl add-br br0
VM2: sudo ovs-vsctl add-br br0
# Configure the IP address of the br0 interface in each VM
VM1: sudo ifconfig br0 11.1.1.1 netmask 255.255.255.0
VM2: sudo ifconfig br0 11.1.1.2 netmask 255.255.255.0
# Create a GRE tunnel between the bridges br0
VM1: sudo ovs-vsctl add-port br0 gre1 -- set Interface gre1 type=gre
options:remote_ip=192.168.56.102
VM2: sudo ovs-vsctl add-port br0 gre1 -- set Interface gre1 type=gre
options:remote_ip=192.168.56.101
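A quick way to check that the tunnel is working, run from VM1 (only addresses defined above are used):
VM1: sudo ovs-vsctl show            # the gre1 port should appear under br0
VM1: ping 11.1.1.2 -c 2             # reaches VM2's br0 across the GRE tunnel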
2.3 Tunnel GRE with LXC (2/3)
# Start each container from a different console
VM1: sudo lxc-start -n LXC1
VM2: sudo lxc-start -n LXC2
# Attach to each container using lxc-attach
VM1: sudo lxc-attach -n LXC1
VM2: sudo lxc-attach -n LXC2
# Testing the connectivity between the containers
- From the container LXC1 (IP=11.1.1.11)
$ ping 11.1.1.12
64 bytes from 11.1.1.12: icmp_seq=90 ttl=64 time=5.55 ms
- From the container LXC2 (IP=11.1.1.12)
$ ping 11.1.1.11
64 bytes from 11.1.1.11: icmp_seq=90 ttl=64 time=5.55 ms
2.3 Tunnel GRE with LXC (3/3)
# iperf is not installed inside the containers, so copy the binary
from VM1 into LXC1 and from VM2 into LXC2:
VM1: sudo cp /usr/bin/iperf /var/lib/lxc/LXC1/rootfs/usr/bin/
VM2: sudo cp /usr/bin/iperf /var/lib/lxc/LXC2/rootfs/usr/bin/
[Figure: LXC1 in VM1 and LXC2 in VM2 communicate through the GRE tunnel between the two br0 bridges.]
# Measure the UDP throughput with iperf
# From container LXC2 (11.1.1.12), launch the iperf server listening on
UDP port 5001
$ sudo iperf -s -u
# From container LXC1 (11.1.1.11), launch the iperf client connecting to
11.1.1.12, UDP port 5001
$ sudo iperf -c 11.1.1.12 -u -b <bandwidth>
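One way to approach the question below is to raise the offered UDP load step by step and watch the server report for packet loss (the values here are arbitrary examples):
$ sudo iperf -c 11.1.1.12 -u -b 10M
$ sudo iperf -c 11.1.1.12 -u -b 100M
$ sudo iperf -c 11.1.1.12 -u -b 500M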
Question: What is the maximum value for Bandwidth and Transfer?
Agenda
3. Docker
3.1 – Introduction
3.2 – Installation and Configuration
3.3 – Docker with GRE Tunnel
3.4 – Docker with Open vSwitch and GRE Tunnel
3.1 Introduction
Docker is an open platform for
developers and sysadmins to build,
ship, and run distributed applications.
Consisting of Docker Engine, a
portable, lightweight runtime and
packaging tool, and Docker Hub, a
cloud service for sharing applications
and automating workflows, Docker
enables apps to be quickly assembled
from components and eliminates the
friction between development, QA,
and production environments.
3.2 Docker Installation (done!)
3.2 – Installation Guide (1/2)
# Install and configure Docker (v1.01) from the official Ubuntu repository
sudo apt-get update && sudo apt-get install docker.io
sudo ln -sf /usr/bin/docker.io /usr/local/bin/docker
sudo sed -i '$acomplete -F _docker docker' /etc/bash_completion.d/docker.io
# Install the latest version of Docker (v1.2)
Add the public key of the Docker repository:
$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys\
> 36A1D7869245C8950F966E92D8576A8BA88D21E9
Add the Docker repository to the APT sources:
$ sudo sh -c "echo deb https://get.docker.io/ubuntu docker main > /etc/apt/sources.list.d/docker.list"
Update the package index:
$ sudo apt-get update
Install the latest version of docker
$ sudo apt-get install lxc-docker
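To confirm the installation succeeded (a quick check; the versions reported depend on which of the two methods above was used):
$ sudo docker version
$ sudo service docker status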
3.2 Docker Installation (done!)
3.2 – Installation Guide (2/2)
# What do you observe when you run the command below?
$ docker -h
# Allow non-root access
$ sudo gpasswd -a <current-user> docker
$ logout
# Log in to the virtual machine again and run the command once more:
$ docker -h
# Enabling the memory and swap accounting
$ sudo vim /etc/default/grub
GRUB_CMDLINE_LINUX=""
Replace by:
GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"
# Update the grub and restart the machine
$ sudo update-grub2 && reboot
# Installing LXC and bridge-utils
$ sudo apt-get install lxc bridge-utils
3.2 Docker
# For this exercise we will continue using the same VMs (VM1 and VM2); see slide 23.
# Check whether the Docker daemon is running
$ sudo ps aux | grep docker
# If it is not running, start it:
$ sudo service docker start
# In each VM, search for and pull the pre-configured container image from Docker Hub
VM1: $ sudo docker search intrig/tutorial
VM2: $ sudo docker search intrig/tutorial
VM1: $ sudo docker pull intrig/tutorial:v1
VM2: $ sudo docker pull intrig/tutorial:v1
# Check that the image was downloaded correctly
VM1: $ docker images
VM2: $ docker images
3.3 Docker with GRE Tunnel
Create GRE Tunnel in Docker (1/2)
# Virtual Machine 1 (VM1)
Configuring the virtual network "net1" in VM1
$ sudo docker network create --subnet=10.1.1.0/24 --gateway=10.1.1.17 --ip-range=10.1.1.16/28 -o "com.docker.network.bridge.name"="docker-net1" net1
[Figure: Docker network net1 (10.1.1.16/28) in VM1 bridged to OVS br0, which carries the GRE tunnel to VM2.]
$ sudo ovs-vsctl add-br br0
$ sudo ip link set dev br0 up
$ sudo ovs-vsctl add-port br0 gre0 -- set interface gre0
type=gre options:remote_ip=192.168.56.102
$ sudo brctl addif docker-net1 br0
3.3 Docker with GRE Tunnel
Create GRE Tunnel in Docker (2/2)
# Virtual Machine 2 (VM2)
Configuring the virtual network "net1" in VM2
$ sudo docker network create --subnet=10.1.1.0/24 --gateway=10.1.1.33 --ip-range=10.1.1.32/28 -o "com.docker.network.bridge.name"="docker-net1" net1
$ sudo ovs-vsctl add-br br0
$ sudo ip link set dev br0 up
$ sudo ovs-vsctl add-port br0 gre0 -- set interface gre0
type=gre options:remote_ip=192.168.56.101
$ sudo brctl addif docker-net1 br0
3.3 Docker with GRE Tunnel
Docker Network Configuration in VM1 (2/2)
# Virtual Machine 1 (VM1)
# Activate docker for the container 1
$ sudo docker run --net=net1 -it --privileged --name=container1
--hostname=container1 --publish 127.0.0.1:2222:22 intrig/tutorial:v1 /bin/bash
3.3 Docker with GRE Tunnel
Docker Network Configuration in VM2 (2/2)
# Virtual Machine 2 (VM2)
# Activate docker for the container 2
$ sudo docker run --net=net1 -it --privileged --name=container2
--hostname=container2 --publish 127.0.0.1:2222:22 intrig/tutorial:v1 /bin/bash
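To confirm the address each container received from net1, the container can be inspected (a sketch; the template assumes the container is attached only to net1):
VM1: $ sudo docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container1
VM2: $ sudo docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container2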
3.3 Docker with GRE Tunnel
# Testing the connectivity between the containers
From Container1
ping 10.1.1.32
From Container2
ping 10.1.1.16
# Copy the "iperf" binary from each VM into its container
VM1: sudo cp /usr/bin/iperf /var/lib/docker/aufs/diff/<ID-docker1>/bin/
VM2: sudo cp /usr/bin/iperf /var/lib/docker/aufs/diff/<ID-docker2>/bin/
How do you know the value of <ID-dockerX> from containerX?
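An alternative that avoids touching the aufs directories is docker cp, which can copy a file into a running container (assuming the Docker version in the VMs supports it, as 1.12 does):
VM1: $ sudo docker cp /usr/bin/iperf container1:/usr/bin/
VM2: $ sudo docker cp /usr/bin/iperf container2:/usr/bin/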
3.3 Docker with GRE Tunnel
# Measure the UDP throughput with iperf
# From Container1, run the iperf server listening on UDP port 5001
$ sudo iperf -s -u
# From Container2, run the iperf client connecting to Container1
(10.1.1.16), UDP port 5001
$ sudo iperf -c 10.1.1.16 -u -b <bandwidth>
What can you say about the "Bandwidth"?
What is the maximum value for Bandwidth and Transfer?
3.4 Docker with OVS and GRE Tunnel
[Figure: in each VM, Docker 1.12.3 containers (C1 and C2 in VM1, C3 and C4 in VM2) attach their eth0 through ports ihost0 (VLAN10) and ihost1 (VLAN20) to the OVS 2.5.0 bridge br2, which also carries the gre0 tunnel port; a second bridge br0 holds the internal port tep0. The GRE tunnel runs between enp0s8 192.168.56.101 and enp0s8 192.168.56.102.]
3.4 Docker with OVS and GRE
# Virtual Machine 1 (VM1)
$ sudo ovs-vsctl del-br br0
$ sudo ovs-vsctl add-br br0
$ sudo ovs-vsctl add-br br2
$ sudo ovs-vsctl add-port br0 tep0 -- set interface tep0 type=internal
$ sudo ifconfig tep0 192.168.200.21 netmask 255.255.255.0
$ sudo ovs-vsctl add-port br2 gre0 -- set interface gre0 type=gre
options:remote_ip=192.168.56.102
# Virtual Machine 2 (VM2)
$ sudo ovs-vsctl del-br br0
$ sudo ovs-vsctl add-br br0
$ sudo ovs-vsctl add-br br2
$ sudo ovs-vsctl add-port br0 tep0 -- set interface tep0 type=internal
$ sudo ifconfig tep0 192.168.200.22 netmask 255.255.255.0
$ sudo ovs-vsctl add-port br2 gre0 -- set interface gre0 type=gre
options:remote_ip=192.168.56.101
3.4 Docker with OVS and GRE
Starting Containers
# Virtual Machine 1 (VM1)
# Delete the Docker container created in the last exercise
$ docker stop container1
$ docker rm container1
# Create two Docker containers and set their network mode to none
$ C1=$(docker run -d --net=none -t -i --name=container1 intrig/tutorial:v1 /bin/bash)
$ C2=$(docker run -d --net=none -t -i --name=container2 intrig/tutorial:v1 /bin/bash)
# Virtual Machine 2 (VM2)
# Delete the Docker container created in the last exercise
$ docker stop container2
$ docker rm container2
# Create two Docker containers and set their network mode to none
$ C3=$(docker run -d --net=none -t -i --name=container3 intrig/tutorial:v1 /bin/bash)
$ C4=$(docker run -d --net=none -t -i --name=container4 intrig/tutorial:v1 /bin/bash)
3.4 Docker with OVS and GRE
Binding containers to Open vSwitch interfaces (1/2)
# Bind the containers to Open vSwitch interfaces
Pipework syntax:
./pipework <bridge-name> -i <docker-interface> -l <ovs-interface> $<variable> <IP>/<mask>@<GW> @<vlan-number>
For VM1:
$ cd /home/tutorial/pipework/
$ sudo ./pipework br2 -i eth0 -l ihost0 $C1 1.0.0.1/24@<GW> @10
$ sudo ./pipework br2 -i eth0 -l ihost1 $C2 1.0.0.2/24@<GW> @20
For VM2:
$ cd /home/tutorial/pipework/
$ sudo ./pipework br2 -i eth0 -l ihost0 $C3 1.0.0.3/24@<GW> @10
$ sudo ./pipework br2 -i eth0 -l ihost1 $C4 1.0.0.4/24@<GW> @20
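To verify that pipework created and configured eth0 inside each container (a quick check with docker exec; it assumes the containers are still running):
VM1: $ sudo docker exec container1 ip addr show eth0
VM2: $ sudo docker exec container3 ip addr show eth0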
3.4 Docker with OVS and GRE
# Using different terminals, start the containers
VM1 From terminal 1: $ docker start -a -i container1
VM1 From terminal 2: $ docker start -a -i container2
VM2 From terminal 3: $ docker start -a -i container3
VM2 From terminal 4: $ docker start -a -i container4
# From the Container1 (Terminal 1)
Container1$ ping 1.0.0.3 -c 2
Container1$ ping 1.0.0.4 -c 2
# From the Container3 (Terminal 3)
Container3$ ping 1.0.0.1 -c 2
Container3$ ping 1.0.0.2 -c 2
Which pings succeed, and why?
3.4 Docker with OVS and GRE
Testing GRE Tunnel
# Measure the throughput using iperf
# From Container1 (1.0.0.1), launch the iperf server listening on UDP port 5001
$ sudo iperf -s -u
# From Container3, launch the iperf client connecting to 1.0.0.1, UDP port 5001
$ sudo iperf -c 1.0.0.1 -u
What can you say about the "Bandwidth"?
What about IPerf TCP?
# Virtual Machine 1 (VM1)
$ sudo ovs-vsctl show
$ sudo ovs-ofctl show br2
$ sudo ovs-ofctl dump-flows br2
# Virtual Machine 2 (VM2)
$ sudo ovs-vsctl show
$ sudo ovs-ofctl show br2
$ sudo ovs-ofctl dump-flows br2
Agenda
4. IPSec & BroIDS
4.1 – Bro IDS Introduction
4.2 – Docker with IPsec Tunnel and Bro IDS
4.3 – IPSec Configuration on OVS
4.4 - Docker Configuration
4.5 - Configuring the IPSec Gateway
4.6 - Bro Minimal Configuration
4.7 - Exercises
4.1 Bro Introduction
Bro is a passive, open-source network traffic analyzer. It is
primarily a security monitor that inspects all traffic on a link in depth
for signs of suspicious activity.
More generally, however, Bro supports a wide range of traffic
analysis tasks even outside of the security domain, including
performance measurements and helping with troubleshooting.
Some interesting features are:
- Deployment
- Analysis
- Scripting Language
- Interfacing
4.1 Bro Architecture
Bro is composed of two components. Its event engine (or core) reduces
the incoming packet stream into a series of higher-level events. These
events reflect network activity in policy-neutral terms, i.e., they describe
what has been seen, but not why, or whether it is significant. For
example, every HTTP request on the wire turns into a corresponding
http_request event that carries with it the involved IP addresses and
ports, the URI being requested, and the HTTP version in use. The event
however does not convey any further interpretation, e.g., of whether that
URI corresponds to a known malware site.
The second component is the script interpreter, which executes a set of event handlers written in
Bro’s custom scripting language. These scripts can express a site’s
security policy, i.e., what actions to take when the monitor detects
different types of activity. More generally they can derive any desired
properties and statistics from the input traffic. Bro’s language comes
with extensive domain-specific types and support functionality; and,
crucially, allows scripts to maintain state over time, enabling them to
track and correlate the evolution of what they observe across
connection and host boundaries. Bro scripts can generate real-time
alerts and also execute arbitrary external programs on demand, e.g., to
trigger an active response to an attack.
4.2 Docker with IPSec Tunnel and Bro IDS
[Figure: VM-1 hosts Host1 (the attacker) and Host2; VM-2 hosts Host3 and Host4 (the SSH server). The two VMs are connected through an IPSec tunnel between their enp0s8 interfaces and reach the Internet through the gateway 10.0.2.15/10.1.1.1.]
Configuration parameters:
Host1: 10.1.1.2/24
Host2: 10.1.1.3/24
Host3: 10.1.1.4/24
Host4: 10.1.1.5/24
VM1: enp0s8 = 192.168.56.101
VM2: enp0s8 = 192.168.56.102
4.3 IPSec Configuration on OVS
# Virtual Machine 1 (VM1)
$ sudo docker network rm net1
$ sudo ovs-vsctl add-br br2
$ sudo ip link set dev br2 up
# Create the ipsec interface “ipsec1” within br2 bridge
$ sudo ovs-vsctl add-port br2 ipsec1 -- set interface ipsec1 type=ipsec_gre
options:remote_ip=192.168.56.102 options:psk=<enter-password>
# Virtual Machine 2 (VM2)
$ sudo docker network rm net1
$ sudo ovs-vsctl add-br br2
$ sudo ip link set dev br2 up
# Create the ipsec interface “ipsec1” within br2 bridge
$ sudo ovs-vsctl add-port br2 ipsec1 -- set interface ipsec1 type=ipsec_gre
options:remote_ip=192.168.56.101 options:psk=<enter-password>
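To confirm that traffic between the VMs is now encrypted, watch the wire between them: ESP packets should appear instead of plain GRE (a quick check, assuming tcpdump is installed in the VMs):
VM1: $ sudo tcpdump -ni enp0s8 esp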
4.4 Docker Configuration (1/2)
Starting Containers
# Virtual Machine 1 (VM1)
# Delete the Docker containers created in the last exercise
# Create two Docker containers and set their network mode to none
$ C1=$(docker run -d --net=none -t -i --name=container1 --hostname=container1
richardqa/ubuntu16.04:v6 /bin/bash)
$ C2=$(docker run -d --net=none -t -i --name=container2 --hostname=container2
richardqa/ubuntu16.04:v6 /bin/bash)
# Virtual Machine 2 (VM2)
# Delete the Docker containers created in the last exercise
# Create two Docker containers and set their network mode to none
$ C3=$(docker run -d --net=none -t -i --name=container3 --hostname=container3
richardqa/ubuntu16.04:v6 /bin/bash)
$ C4=$(docker run -d --net=none -t -i --name=container4 --hostname=container4
richardqa/ubuntu16.04:v6 /bin/bash)
4.4 Docker Configuration (2/2)
# Bind the containers to Open vSwitch interfaces
For VM1:
$ cd /home/tutorial/pipework/
$ sudo ./pipework br2 -i eth0 -l ihost0 $C1 10.1.1.2/[email protected]
$ sudo ./pipework br2 -i eth0 -l ihost1 $C2 10.1.1.3/[email protected]
For VM2:
$ cd /home/tutorial/pipework/
$ sudo ./pipework br2 -i eth0 -l ihost0 $C3 10.1.1.4/[email protected]
$ sudo ./pipework br2 -i eth0 -l ihost1 $C4 10.1.1.5/[email protected]
4.4 Configuring IPSec Gateway
Note that each VM has an interface "enp0s3" connected to the Internet
with IP 10.0.2.15.
How can we create a route for each container to reach the Internet?
Steps:
1- Configure IP address for br2 in VM1 (br2=10.1.1.1/24)
2- Configure an IPTABLES rule that allows each container, including those
reached through the IPSec channel, to access the Internet.
(Use -A POSTROUTING and -j MASQUERADE)
Q1: Write the IPTABLES rule
3- Check the connectivity of each container to the Internet
4.5 Mirror configuration to IPsec
channel
a) Create a mirror that forwards every packet arriving through the
IPsec channel to the local br2 interface in VM2.
ovs-vsctl -- set bridge br2 mirrors=@m -- --id= … ?
Q2: Write the complete command to mirror packets
b) Check if the “br2” interface is receiving packets from the
IPSEC tunnel
In VM1, docker exec container1 ping -c 2 10.1.1.4
In VM2, sudo tshark -i br2
Q3: Do you see any packets?
4.6 Bro: Minimal Starting Configuration (1/3)
These are the basic configuration changes for a minimal
BroControl installation that will manage a single Bro instance on
the localhost:
1. In $PREFIX/etc/node.cfg, set the interface to monitor.
2. In $PREFIX/etc/networks.cfg, comment out the default
settings and add the networks that Bro will consider local to
the monitored environment.
3. In $PREFIX/etc/broctl.cfg, change the MailTo email address
to a desired recipient and the LogRotationInterval to a
desired log archival frequency.
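For illustration, a minimal set of edits could look like the following (a sketch: the monitored interface and local network follow this tutorial's setup, while the remaining values are BroControl defaults):
# $PREFIX/etc/node.cfg
[bro]
type=standalone
host=localhost
interface=br2
# $PREFIX/etc/networks.cfg
10.1.1.0/24     Tutorial lab network
# $PREFIX/etc/broctl.cfg
MailTo = root@localhost
LogRotationInterval = 3600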
4.6 Bro: Minimal Starting Configuration (2/3)
Now start the BroControl shell by running: broctl
Since this is the first-time use of the shell, perform an initial installation of the
BroControl configuration:
[BroControl] > install
Then start up a Bro instance:
[BroControl] > start
Check if Bro is running
[BroControl] > status
* If there are errors while trying to start the Bro instance, you can
view the details with the diag command. If started successfully, the Bro
instance will begin analyzing traffic according to a default policy and
output the results in $PREFIX/logs.
4.6 Bro: Minimal Starting Configuration (3/3)
a) Create a new folder "test-bro" and run Bro from inside it,
indicating the interface to be monitored.
VM2$ mkdir ~/test-bro/
VM2$ sudo -s
# password: tutorial
VM2# cd ~/test-bro/
VM2# bro -i br2 local
b) Simple test that raises an alert for a possible threat from an external host
VM1$ docker start -ai container1
Container1: $ nmap 10.1.1.4     # scan a container on VM2
VM2$ ls -ls ~/test-bro/
conn.log  loaded_scripts.log  notice.log  packet_filter.log  stats.log
c) Check “notice.log” and Q4: explain the fields shown in that file.
4.7 Bro: Detecting Brute-force attacks
Bro has a script to detect Brute-Force attacks.
Check the file:
$PREFIX/share/bro/policy/protocols/ssh/detect-bruteforcing.bro
module SSH;
export {
redef enum Notice::Type += {
Password_Guessing,
Login_By_Password_Guesser,
};
redef enum Intel::Where += {
SSH::SUCCESSFUL_LOGIN,
};
const password_guesses_limit: double = 30 &redef;
const guessing_timeout = 30 mins &redef;
const ignore_guessers: table[subnet] of subnet &redef;
}
event bro_init()
{
local r1: SumStats::Reducer = [$stream="ssh.login.failure", $apply=set(SumStats::SUM, SumStats::SAMPLE),
$num_samples=5];
SumStats::create([$name="detect-ssh-bruteforcing",
$epoch=guessing_timeout,
4.7 Bro: Detecting Brute-force attacks
$reducers=set(r1),
$threshold_val(key: SumStats::Key, result: SumStats::Result) =
{
return result["ssh.login.failure"]$sum;
},
$threshold=password_guesses_limit,
$threshold_crossed(key: SumStats::Key, result: SumStats::Result) =
{
local r = result["ssh.login.failure"];
local sub_msg = fmt("Sampled servers: ");
local samples = r$samples;
for ( i in samples )
{
if ( samples[i]?$str )
sub_msg = fmt("%s%s %s", sub_msg, i==0 ? "":",", samples[i]$str);
}
# Generate the notice.
NOTICE([$note=Password_Guessing,
$msg=fmt("%s appears to be guessing SSH passwords (seen in %d connections).",
key$host, r$num),
$sub=sub_msg,
$src=key$host,
$identifier=cat(key$host)]);
}]);
}
4.7 Bro: Detecting Brute-force attacks
event ssh_auth_successful(c: connection, auth_method_none: bool)
{
local id = c$id;
Intel::seen([$host=id$orig_h,
$conn=c,
$where=SSH::SUCCESSFUL_LOGIN]);
}
event ssh_auth_failed(c: connection)
{
local id = c$id;
# Add data to the FAILED_LOGIN metric unless this connection should
# be ignored.
if ( ! (id$orig_h in ignore_guessers &&
id$resp_h in ignore_guessers[id$orig_h]) )
SumStats::observe("ssh.login.failure", [$host=id$orig_h], [$str=cat(id$resp_h)]);
}
4.8 Configuring alerts in Bro
We show an example of an alert against brute-force attacks, configured in Bro IDS.
Create the file brute-force.bro with the following contents:
@load protocols/ssh/detect-bruteforcing
redef Site::local_nets = {
10.1.1.0/24,
};
redef ignore_checksums = T;
redef Notice::mail_dest = "<[email protected]>";
redef Notice::emailed_types += { SSH::Password_Guessing, };
redef SSH::password_guesses_limit=10;
hook Notice::policy(n: Notice::Info)
{
if ( n$note == SSH::Password_Guessing && /10\.1\.1\.5/ in n$sub )
add n$actions[Notice::ACTION_EMAIL];
}
4.9 Simulating a brute-force attack
Now execute Bro again from the directory ~/test-bro in VM2.
Remember to execute bro as the superuser.
VM2~/test-bro# bro -i br2 <script>.bro
From container1, execute the program "hydra" to generate a large number of
authentication attempts against the SSH server located in container4.
Before executing Hydra, create two files, "users.txt" and "pass.txt", containing
a list of candidate user names and passwords for Hydra to try.
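For example (the entries below are arbitrary; any small word lists will do):
container1$ printf "root\nadmin\ntutorial\n" > users.txt
container1$ printf "123456\npassword\ntutorial\n" > pass.txt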
container1$ hydra -L users.txt -P pass.txt 10.1.1.5 ssh
After a few seconds, Bro IDS should detect the attack. Check the
directory "~/test-bro" and take a look at the file "notice.log".
Q5: What can you say about the contents of notice.log?
Take a look at the file $PREFIX/share/bro/policy/protocols/ssh/detect-bruteforcing.bro
Q6: Which parameters can you change to modify the behavior of the alerts against
brute-force attacks?