Upgrading an Elasticsearch cluster from 2.x to 5.0.0

The Sisyphe sister project is running an Elasticsearch cluster. We wrote two Briks to help us manage this cluster: client::elasticsearch and client::elasticsearch::query. The first one has some raw mappings to the Elasticsearch API and mainly uses the Search::Elasticsearch Perl module. The second serves as a quick way to perform basic searches on your indices. In this post, we will describe how to use these modules to perform an Elasticsearch major version upgrade.

Requirements

You will of course have to install Metabrik on a host with access to all nodes of your cluster. Once done, you have to load and configure required Briks:

use brik::tool
run brik::tool install client::elasticsearch
run brik::tool install client::elasticsearch::query

Once installed, you do not need to call these Commands anymore (except to update dependencies). You should do it now, in case you have an older version of the Search::Elasticsearch Perl module: without version 5.x, this procedure will fail.

use client::elasticsearch
use client::elasticsearch::query
my $nodes = [ 'http://node1:9200', 'http://node2:9200', 'http://node3:9200' ]
set client::elasticsearch nodes $nodes
set client::elasticsearch::query nodes $nodes
run client::elasticsearch open

You can put these lines in your ~/.metabrik_rc file and start metabrik.

Backup all your indices

Performing such a major upgrade is a risky thing. We urge you not to skip this step. To make things easy, we developed an export_as_csv Command. It is as simple as executing it for all your indices:

run client::elasticsearch list_indices
for (@$RUN) { $CON->run('client::elasticsearch', 'export_as_csv', $_) }

All your indices will be saved in the current directory as CSV files, one per combination of index and type.

First steps with client::elasticsearch Brik

You have plenty of Commands available in this Brik. We will not describe all of them here, but you can start by typing help at the prompt and try the info Command:

help client::elasticsearch
run client::elasticsearch info

client-elasticsearch-info

You may also try the get_cluster_health Command or the list_indices one:

run client::elasticsearch get_cluster_health
run client::elasticsearch list_indices

get-cluster-health

Now that you get the picture, try playing with some other Commands, like the www_search Command:

run client::elasticsearch www_search * www-2016-02-06

search-www

Time to upgrade your cluster

Elasticsearch 5.0.0 is out, and we wanted to give it a try. We had to update the client::elasticsearch Brik to make it compatible with this new version but, more importantly, we had to upgrade our cluster from 2.4.x to 5.0.0. Here is the procedure you may apply; it is based on the official document for a cluster upgrade.

Stop your indexation tasks

Of course, we will have to stop all of your ES (Elasticsearch) instances, so the first step is to stop all of your indexation tasks. We are mainly using Logstash, so we have to stop these processes on all instances. The specific command to run on your server depends on the operating system. For us, it is a matter of shutting it down like:

sudo service logstash stop

Once done, you have to disable shard allocation as described in the documentation. There is a Command for that, and another one to verify it has been applied:

run client::elasticsearch flush_synced
run client::elasticsearch disable_shard_allocation
run client::elasticsearch get_cluster_settings

get-cluster-settings

Then perform another synced flush to make recovery faster after cluster restart. It may take some time.

run client::elasticsearch flush_synced

Backup required indices

You will have to backup your indices. You should have already done it in step one. Some of them were probably created with an older version of ES (before 2.0.0); backups will be used to restore them in the new index format for ES 5.0.0. After the backup is done, you must delete old indices so they will not interfere with the startup of the ES 5.0.0 process. Our upgrade process failed the first time because of those old indices, so we had to export them from the ES 2.x cluster and import them back after the ES 5.x upgrade. A typical error message is:

"The index [[index-2016-02-03/tIhwAIL3R6G4zTUG6ucf6g]] was created before v2.0.0.beta1. It should be reindexed in Elasticsearch 2.x before upgrading to 5.0.0."
run client::elasticsearch list_indices_version *

list-indices-version

If you have indices older than version “2020199” (meaning 2.2.1), you should consider reindexing instead (see the previously mentioned CSV import/export method). If the backup task completed successfully, you can now safely delete your backed-up indices:

run client::elasticsearch delete_index index-1,index-2,other-*
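As an aside, the numeric ids shown by list_indices_version (like 2020199) decode as major*1000000 + minor*10000 + patch*100 + build, where a trailing 99 means a release build. Here is a minimal Perl sketch of that decoding, assuming this layout:

```perl
use strict;
use warnings;

# Decode an Elasticsearch index "created" version id into M.m.p form.
# Assumed layout: major*1000000 + minor*10000 + patch*100 + build.
sub decode_es_version {
   my ($id) = @_;
   my $major = int($id / 1000000);
   my $minor = int(($id % 1000000) / 10000);
   my $patch = int(($id % 10000) / 100);
   return "$major.$minor.$patch";
}

print decode_es_version(2020199), "\n";  # 2.2.1
```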

Alternative backup option with snapshotting

You may also consider using the snapshotting feature available in ES to backup your indices. This feature will unfortunately not upgrade them, and you will have to export/import some of them as CSV for reindexing. But for future management of your indices, it is a feature worth knowing.

To take advantage of it, you first have to have a shared filesystem available to all your nodes. We configured a shared NFS server for that, and updated the elasticsearch.yml configuration file to add the following shared path:

path.repo: ["/nfs/backup"]

Create the snapshot repository and backup

Once NFS is set up and running and all nodes can read/write to it, you have to create a snapshot repository:

run client::elasticsearch create_shared_fs_snapshot_repository /nfs/backup/es

Verify it has worked:

run client::elasticsearch get_snapshot_repositories

Then perform the backup as either a full backup or as a selected backup of specific indices:

run client::elasticsearch create_snapshot
run client::elasticsearch create_snapshot_for_indices "[ qw(index1 index2 other-*) ]"

Wait for it to be done and look at its progress with:

do { $RUN = ! $CON->run('client::elasticsearch', 'is_snapshot_finished'); print "Is running: $RUN\n"; sleep(5) } while ($RUN)

Alternatively, look at its status:

run client::elasticsearch get_snapshot_status

Restore snapshotted indices

Later on, if you want to restore indices:

run client::elasticsearch restore_snapshot snapshot repository

Note: you may still be unable to restore ancient indices. If you have to restore only specific indices, you can do it by using the restore_snapshot_for_indices Command:

run client::elasticsearch restore_snapshot_for_indices "[ qw(index-2016-05-*) ]" type repository

And to see progress:

do { $RUN = $CON->run('client::elasticsearch', 'count_yellow_shards'); print "Remaining: $RUN\n"; sleep(60) } while ($RUN)

Shutdown and upgrade all nodes

Now stop all your elasticsearch processes.

sudo service elasticsearch stop

The software upgrade process depends on your operating system, so we will not describe it here. You also have to consider upgrading any installed plugins. After the software upgrade, you will have to change some configuration directives which are either new or obsolete. For instance, we had to remove:

index.number_of_replicas
discovery.zen.ping.multicast.enabled
path.work
path.plugins

And we had to create a new directory:

mkdir /usr/local/etc/elasticsearch/scripts

A list of other breaking changes can be found here. It is also safe to set the minimum master nodes parameter value as described here:

discovery.zen.minimum_master_nodes: 2

Time to start and pray

Before restarting, we rename the old log file so we can easily see the new process starting up and potential errors. We restart all our nodes and pray for a good and fast recovery.

Note: for FreeBSD, we had to modify the rc.d script to enforce Java heap sizes:

ES_JAVA_OPTS="-Xms8g -Xmx8g"
export ES_JAVA_OPTS

Typical error message is:

[2016-11-13T07:43:25,786][ERROR][o.e.b.Bootstrap ] [node1] node validation exception
 bootstrap checks failed
 initial heap size [536870912] not equal to maximum heap size [8558477312]; this can cause resize pauses and prevents mlockall from locking the entire heap
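The bootstrap check above fails when the initial and maximum heap sizes differ. The same check can be sketched on an ES_JAVA_OPTS string with a small Perl helper (hypothetical, not part of Elasticsearch):

```perl
use strict;
use warnings;

# Hypothetical helper: return 1 when -Xms and -Xmx carry the same
# value in a Java options string, 0 otherwise (what the ES 5.x
# bootstrap check enforces).
sub heap_sizes_match {
   my ($opts) = @_;
   my ($xms) = $opts =~ /-Xms(\S+)/;
   my ($xmx) = $opts =~ /-Xmx(\S+)/;
   return 0 unless defined $xms && defined $xmx;
   return $xms eq $xmx ? 1 : 0;
}

print heap_sizes_match('-Xms8g -Xmx8g'), "\n";    # 1
print heap_sizes_match('-Xms512m -Xmx8g'), "\n";  # 0
```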

Restore all indices and reenable indexation

Your upgrade should be complete now. Restart your Elasticsearch processes on all your nodes with:

sudo service elasticsearch start
sudo service logstash start

You can re-enable shard allocation and import your saved CSV backups:

run client::elasticsearch enable_shard_allocation
run shell::command capture ls *.csv
for (@$RUN) { $CON->run('client::elasticsearch', 'import_from_csv', $_) }

Upgrade of Logstash and Kibana

Finally, you have to upgrade Logstash and Kibana to version 5.0.0. Fortunately, it worked perfectly for us. We hope your upgrade will go smoothly thanks to this guide; please let us know of any success or failure 🙂

 

Default logins and passwords used by Mirai botnets

A lot of attention has been raised around the Mirai botnets, especially since their source code was published on GitHub. Because we just wanted to know which login/password combinations were used to log in to remote telnet services, we extracted that information and created a password::mirai Brik so we could easily play with it.

You have two main usages: the first one is to return a Variable usable within Metabrik so you can use it in other Briks (even though there is currently no client::telnet Brik). The second is simply to save login/password combinations to a single output file, either as CSV or as plain login:pass couples.

EDIT: there is now a beginning of a client::telnet Brik.

Here is the usage:

use password::mirai
help password::mirai
run password::mirai telnet
run password::mirai save_as_csv output.csv
run password::mirai save_as_couple output.couple
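The save_as_couple output described above is simply one login:pass pair per line. A minimal sketch of that format (the two pairs here are well-known examples from the Mirai list, not the full set):

```perl
use strict;
use warnings;

# Print login:pass couples, one per line, as the save_as_couple
# Command does (example pairs only).
my @pairs = ( [ 'root', 'xc3511' ], [ 'admin', 'admin' ] );
for my $p (@pairs) {
   print join(':', @$p), "\n";
}
```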

Enjoy!

mirai-passwords

Creating your own Briks or modifying existing ones

Starting from Metabrik 1.24, you have an easy way to create and use your own Briks. You can even modify existing ones, and they will take precedence over the system ones. Let’s dig into these awesome features in this post.

Update to latest version

As always, you should be running the latest version. To update, it is as simple as running in the Shell:

use brik::tool
run brik::tool update

You should even put “use brik::tool” in your $HOME/.metabrik_rc file so it is loaded at every start. Do the same for the brik::search Brik; you will see how useful it can be.

After the update, you can either restart Metabrik or use the “reuse” Command:

reuse

You should also add an alias for your preferred editor. Starting from Metabrik version 1.23, the default is to capture external programs’ output, which won’t really work with editors like vi. In the end, you should have some lines like below in your .metabrik_rc file:

use brik::tool
use brik::search
alias vi "run shell::command system vi"
alias search "run brik::search"

Creating a skeleton of a new Brik

The brik::tool Brik has many useful Commands. One is used to create a skeleton of a new Brik for you. Try help to see others:

help brik::tool
[+] set brik::tool datadir <datadir>
[+] set brik::tool repository <Repository>
[+] run brik::tool clone <Brik> [ <Repository> ]
[+] run brik::tool create_brik <Brik> [ <Repository> ]
[+] run brik::tool create_tool <filename.pl> [ <Repository> ]
[+] run brik::tool get_brik_hierarchy <Brik>
[+] run brik::tool get_brik_hierarchy_recursive <Brik>
[+] run brik::tool get_brik_module_file <Brik> [ <directory_list> ]
[+] run brik::tool get_need_packages [ <Brik> ]
[+] run brik::tool get_need_packages_recursive <Brik>
[+] run brik::tool get_require_briks [ <Brik> ]
[+] run brik::tool get_require_briks_recursive [ <Brik> ]
[+] run brik::tool get_require_modules [ <Brik> ]
[+] run brik::tool get_require_modules_recursive <Brik>
[+] run brik::tool install <Brik>
[+] run brik::tool install_all_need_packages
[+] run brik::tool install_all_require_modules
[+] run brik::tool install_needed_packages <Brik>
[+] run brik::tool install_required_modules <Brik>
[+] run brik::tool test_repository
[+] run brik::tool update
[+] run brik::tool update_core
[+] run brik::tool update_repository
[+] run brik::tool view_brik_source <Brik>

Let’s create our first Brik named my::first and start editing it like:

run brik::tool create_brik my::first
"/home/gomor/metabrik/repository/lib/Metabrik/My/First.pm"
vi $RUN

The first thing you may notice is that the create_brik Command has created a new .pm file and the associated directory hierarchy to access the file. We used a run Command, so the $RUN Variable is set and we can use it to directly edit the new Brik.

We created an alias for the vi external command, so it worked like a charm here. If you had an error, please go back and read on how to add the corresponding alias in your .metabrik_rc file 🙂

We will not dive into how to actually write the code for a working Brik here; you have many examples already online on the trac server. But creating the path and the file is not enough to be able to use your Brik yet.

Another useful Command is view_brik_source. You can easily see the source code of any Brik by using this Command. Example:

run brik::tool view_brik_source core::context

But well, core::context Brik contains all the magic behind Metabrik and is by far the most difficult to read for non-Perl programmers (and maybe even Perl ones?).

Making the Brik usable

The file is created and you want to use it. If you do it now, you will get an error: in fact, you will be able to load it, but the system cannot find it automatically yet. So you have to update your running context like:

run core::context update_available

This Command will go through all directories containing potential Briks to make them accessible to other Briks. You can then search for yours by Tag, for instance:

search tag my
[+] Used:
[+] Not used:
[+]    my::first [first, my, unstable, used]
1

And finally use it and ask for help on how to use it:

use my::first
help my::first
[+] set my::first datadir  
[+] run my::first install

As you can see, we can search for existing Briks by Tag. A Tag is created based on the Brik name and other properties, like whether it is used or not. You can manually add Tags to your Briks by editing the Tag Property.
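The name-derived part of the Tags can be sketched in plain Perl (this covers only the Brik name; property-based Tags like used or unstable come from elsewhere):

```perl
use strict;
use warnings;

# Derive default Tags from a Brik name, as seen in the search
# output above ([first, my, ...]).
my $brik = 'my::first';
my @tags = sort split /::/, $brik;
print "@tags\n";  # first my
```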

And what about modifying an existing Brik?

Ah yes. The first thing is to find where the Brik you want to modify is stored on your system. It is usually somewhere in /usr. In our example, we want to modify the system::os Brik to add support for a new operating system:

find /usr -name Os.pm
["/usr/local/share/perl/5.22.1/Metabrik/System/Os.pm"]
my $os = $RUN->[0]
get core::global repository
my $local = $GET

We found our Brik and saved its path in a Variable. To copy it to the correct Repository, we have to create the subdirectories the right way. The local Repository is the right place to do that, so we fetch it by using the core::global Brik and save its path to the $local Variable. We also saved the path to the original Brik in the $os Variable.

The name of the directory has to follow the Brik name, and be put under a parent lib/Metabrik directory. So the Brik named system::os will end up in the lib/Metabrik/System directory. You will have to mix Perl code with Metabrik Commands to do so. Then, copy the file and change its permissions:

my $dir = $local.'/lib/Metabrik/System'   # Perl code
mkdir -p $dir  # A Metabrik Command, called for you with "run shell::command capture"
cp $os $dir  # Also A Metabrik Command, called for you with "run shell::command capture"
my $file = "$dir/Os.pm"  # Perl code
chmod 644 $file  # And a last Metabrik Command.
run core::context update_available
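The naming rule above (Brik name to lib/Metabrik path) can be sketched in plain Perl, using a hypothetical helper that is not part of Metabrik:

```perl
use strict;
use warnings;

# Hypothetical helper: map a Brik name (e.g. "system::os") to its
# module path under lib/Metabrik, mirroring what we did by hand above.
sub brik_to_path {
   my ($brik) = @_;
   my @parts = map { ucfirst } split /::/, $brik;
   return 'lib/Metabrik/' . join('/', @parts) . '.pm';
}

print brik_to_path('system::os'), "\n";  # lib/Metabrik/System/Os.pm
```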

EDIT: There is now a Command to clone an existing Brik, so you may skip all these steps. Simply call the clone Command as in the following snippet; the new file will be created and you can start editing it:

run brik::tool clone system::os
vi $RUN

You can now modify the code of this Brik. When you want to test it, just use the reuse Command:

reuse

That’s all for today, happy coding 🙂

Building a Docker image to run Metabrik… from Metabrik

Today, we will show how we can use the system::docker Brik to build a Metabrik Docker image. Of course, you have to have Metabrik installed first, but that is as easy as following the online guide. Alternatively, you can use the publicly available Docker image, which is quite an inception concept.

Installing the Docker image from the hub

Installing Docker is also simple. Run the following command as a standard user:

wget -qO- https://get.docker.com/ | sh

Then, you may fetch the Metabrik Docker image and run it:

docker pull metabrik/metabrik
docker run -it metabrik/metabrik

And building your own image from your system

You may also start from a fresh Metabrik installation. If you want to customize your very own Docker image, you can download the Dockerfile from the trac server. Once you have it, put it in its own directory and start Metabrik.

$ mkdir ~/docker-metabrik
$ mv ~/Downloads/Dockerfile ~/docker-metabrik/
$ metabrik.sh
messiah:~>

Once Metabrik is running, you have to use the system::docker Brik and change to the docker directory.

messiah:~> cd ~/docker-metabrik
messiah:~/docker-metabrik> use system::docker
[-] system::docker: brik_check_require_binaries: binary [wget] not found in PATH
[-] system::docker: brik_preinit: brik_checks failed
[-] core::context: call: use: unable to use Brik [system::docker]

But you got an error, because some dependencies are not available yet. No worries: Metabrik has a dependency-handling feature for every Brik. Load the brik::tool Brik and install the system::docker one like:

messiah:~/docker-metabrik> use brik::tool 
[-] core::context: call: use: Brik [brik::tool] already used
messiah:~/docker-metabrik> run brik::tool install $USE
[..]
messiah:~/docker-metabrik> use system::docker
messiah:~/docker-metabrik> run system::docker install
[!] system::docker: brik_init: you have to execute install Command now
apparmor is enabled in the kernel, but apparmor_parser missing
+ sh -c sleep 3; apt-get update
[..]

Ready to build the Docker image

Everything is set up, so we can start building.

get system::docker
set system::docker username $username
set system::docker password $password
set system::docker email $email
run system::docker login

run system::docker get_image_id metabrik
run system::docker delete $RUN # Delete previous latest tag
run system::docker build metabrik .
run system::docker tag $RUN metabrik/metabrik:latest

run system::docker push metabrik/metabrik:latest

Automating it from a Metascript

If you have to repeat this task, you will of course want to write a script. And you can do it as a Metascript. Create a file called build-metabrik-docker-image.meta containing:

use system::file
use system::docker

my $email = 'EMAIL'
my $username = 'USERNAME'
my $password = 'PASSWORD'

set system::docker email $email
set system::docker username $username
set system::docker password $password
run system::docker login
if ($ERR) { exit 0; }

get core::global homedir
my $dir = $GET."/metabrik-docker/"
run system::file mkdir $dir
get core::global repository
my $file = $GET."/../Dockerfile"
run system::file copy $file $dir

run system::docker get_image_id metabrik
run system::docker delete $RUN # Delete previous latest tag
run system::docker build metabrik $dir
run system::docker tag metabrik metabrik/metabrik:latest

run system::docker push metabrik/metabrik:latest

exit 1

And run it:

metabrik --script build-metabrik-docker-image.meta

 

Metabrik has been demonstrated at YAPC::Europe 2016 in Cluj-Napoca, Romania

We are pleased to have demonstrated the power of Metabrik during the YAPC::Europe 2016 (we should now say The Perl Conference) at Cluj-Napoca in Romania.

Slides are available here:

You may also be interested in the demo that we’ve shown:

And the video starts at 1:18:00 and lasts for 30 minutes:

Using just a single Brik for a quick program

Today, we will show you how to use just a single Brik to build a standalone program without installing a multitude of packages or modules. To achieve that, we will use brik::tool, which is a helper to install dependencies, and use the classic lookup::iplocation Brik as the single Brik.

Note: you will need version 1.22 for this to work.

Install or update The Metabrik Platform

Installing should be as easy as running:

sudo cpan install Metabrik
sudo cpan install Metabrik::Repository

If that does not work for you, you can follow more complete installation instructions.

Then, you can update the platform at any time to latest repository version with this one-liner:

perl -MMetabrik::Core::Context -e 'Metabrik::Core::Context->new_brik_run("brik::tool", "update")'

You could put an alias in your shell to help doing so in the future:

alias update-metabrik="perl -MMetabrik::Core::Context -e 'Metabrik::Core::Context->new_brik_run(
\"brik::tool\", \"update\")'"

Installing a Brik’s dependencies

As a Brik may have some package or module dependencies, we’ve put some effort into simplifying their installation. There is a brik::tool install Command dedicated to that; it will even know when to use sudo for you. Use this one-liner to install dependencies for our example lookup::iplocation Brik:

perl -MMetabrik::Core::Context -e 'Metabrik::Core::Context->new_brik_run("brik::tool", "install", "lookup::iplocation")'

You could also have used The Metabrik Shell by launching metabrik.sh:

messiah:~> run brik::tool install lookup::iplocation

Creating a meta-tool

Now that everything is in place, you need to know which Commands are available for the lookup::iplocation Brik. The easiest way is to use The Metabrik Shell help Command. And don’t forget to use the <tab> keystroke for completion at each step:

messiah:~> use lookup::iplocation
[*] core::shell: use: Brik [lookup::iplocation] success
messiah:~> help lookup::iplocation
[+] set lookup::iplocation datadir <datadir>
[+] run lookup::iplocation from_ip <ip_address>
[+] run lookup::iplocation from_ipv4 <ipv4_address>
[+] run lookup::iplocation from_ipv6 <ipv6_address>
[+] run lookup::iplocation organization_name <ip_address>
[+] run lookup::iplocation subnet4 <ipv4_address>
[+] run lookup::iplocation update

Two Commands are of interest here: update and from_ip. The first one lets you get the latest version of the Maxmind IP geolocation database.

You now know what you want to do, so let’s use another brik::tool Command to create a meta-tool skeleton:

perl -MMetabrik::Core::Context -e 'Metabrik::Core::Context->new_brik_run("brik::tool", "create_tool", "lookup-iplocation.pl")'

Or from The Metabrik Shell by launching metabrik.sh:

messiah:~> run brik::tool create_tool lookup-iplocation.pl

And you populate the generated skeleton with the required code to call the update and from_ip Commands:

#!/usr/bin/env perl
#
# $Id$
#
use strict;
use warnings;

my $ip = shift or die("Usage: $0 <ip_address>\n");

# Uncomment to use a custom repository
#use lib qw(/lib);

use Data::Dumper;
use Metabrik::Core::Context;
use Metabrik::Lookup::Iplocation;

my $con = Metabrik::Core::Context->new or die("core::context");

# Init other Briks here
my $li = Metabrik::Lookup::Iplocation->new_from_brik_init($con) or die("lookup::iplocation");
$li->update or die("update failed");

# Put Metatool code here
print Dumper($li->from_ip($ip))."\n";

exit(0);

And voilà. Test your program:

perl lookup-iplocation.pl 93.184.216.34
[+] mirror: file [/home/gomor/metabrik/lookup-iplocation/GeoIPv6.dat.gz] not modified since last check
[+] mirror: file [/home/gomor/metabrik/lookup-iplocation/GeoIP.dat.gz] not modified since last check
[+] mirror: file [/home/gomor/metabrik/lookup-iplocation/GeoIPCity.dat.gz] not modified since last check
[+] mirror: file [/home/gomor/metabrik/lookup-iplocation/GeoIPASNum.dat.gz] not modified since last check
$VAR1 = {
          'country_code3' => 'USA',
          'metro_code' => 506,
          'city' => 'Norwell',
          'dma_code' => 506,
          'country_code' => 'US',
          'postal_code' => '02061',
          'country_name' => 'United States',
          'continent_code' => 'NA',
          'region_name' => 'Massachusetts',
          'longitude' => '-70.8228',
          'region' => 'MA',
          'area_code' => 781,
          'latitude' => '42.1508'
        };

Alternatively, for such a simple task, you could have used The Metabrik Shell:

messiah:~> use lookup::iplocation
[*] core::shell: use: Brik [lookup::iplocation] success
messiah:~> run lookup::iplocation update
messiah:~> run lookup::iplocation from_ip 93.184.216.34
{
  area_code      => 781,
  city           => "Norwell",
  continent_code => "NA",
  country_code   => "US",
  country_code3  => "USA",
  country_name   => "United States",
  dma_code       => 506,
  latitude       => 42.1508,
  longitude      => -70.8228,
  metro_code     => 506,
  postal_code    => "02061",
  region         => "MA",
  region_name    => "Massachusetts",
}

Conclusion

We have shown how to get up-to-date with The Metabrik Platform and how to develop a meta-tool. With that in hand, you can start to develop programs using the 200+ available Briks. For instance, try to add lookup::threatlist support to lookup-iplocation.pl as an exercise.

Malware analysis with VM instrumentation, WMI, winexe, Volatility and Metabrik

In this article, we will show how to take advantage of Metabrik to automate some malware analysis tasks. The goal is to execute a malware sample in a virtual machine (VM), just after saving a snapshot of the Windows operating system. In our example, this snapshot only includes running processes, but you will see you can do more than just that. Here, we introduce the remote::wmi, remote::winexe and system::virtualbox Briks.

We will also introduce the forensic::volatility Brik which can help you perform dynamic malware analysis and extract IOCs, for instance.

Tip: you can use <tab> keystroke to complete Brik names and Commands while using The Metabrik Shell.

Setting up the environment

wmic and winexe are programs that you have to compile yourself. Fortunately, Metabrik makes this process as easy as running the install Command. Since the wmic and winexe programs ship within the same software suite, you just have to run the install Command for one of the remote::wmi or remote::winexe Briks. We don’t run the install Command for the system::virtualbox Brik, because we suppose you already have some VirtualBox VMs installed.

use brik::tool
use remote::wmi
use remote::winexe
use forensic::volatility
help remote::wmi
help remote::winexe
help forensic::volatility
run brik::tool install_needed_packages remote::wmi
run brik::tool install_needed_packages forensic::volatility

screenshot-00002

Your VM also has to be configured to allow WMI accesses for a given user, and have the WINEXESVC service started. Some help on how to do that can be found in remote::wmi and remote::winexe Briks source code.

Starting a VM and taking a snapshot

Our environment is up and running. Let’s start a VM and take a snapshot before we remotely execute a malware sample within it. For the purpose of this exercise, the malware will simply be the calc.exe program.

use system::virtualbox
help system::virtualbox
run system::virtualbox list

screenshot-00003

Let’s start our Windows machine in headless mode: we don’t want to speak with this kind of GUI.

set system::virtualbox type headless
run system::virtualbox start 602782ec-40c0-42ba-ad63-4e56a8bd5657
run system::virtualbox snapshot_live 602782ec-40c0-42ba-ad63-4e56a8bd5657 "before calc.exe"

screenshot-00004

 

I know the IP address of the machine, but you could have found it by using ARP scanning on vboxnet0 interface thanks to the network::arp Brik.

my $win = '192.168.56.101'
my $user = 'Administrator'
my $password = 'YOUR_SECRET'
set remote::wmi host $win
set remote::wmi user $user
set remote::wmi password $password
set remote::winexe host $win
set remote::winexe user $user
set remote::winexe password $password
run remote::wmi get_win32_process
for (@$RUN) {
print $_->{Name}."\n";
}

You should see no calc.exe right now.

screenshot-00005

Now, launch the calc.exe program and check the process list to see if you can find it. Note that you will have to hit Ctrl+C because the program will block here; calc.exe should still be running on the remote host.

run remote::winexe execute "cmd.exe /c calc.exe"
run remote::wmi get_win32_process
my @processes = map { $_->{Name} } @$RUN
my $found = grep { /calc.exe/ } @processes

In the screenshot below, you will see 2 as the result of the grep command. That’s because we ran the execute Command twice with calc.exe during our testing.
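That count comes from Perl’s grep in scalar context, which returns the number of matching elements. In isolation (example process names, not real WMI output):

```perl
use strict;
use warnings;

# grep assigned to a scalar returns the match count, which is why
# two running calc.exe instances give 2.
my @processes = qw(svchost.exe calc.exe explorer.exe calc.exe);
my $found = grep { /calc\.exe/ } @processes;
print "$found\n";  # 2
```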

screenshot-00006

Now, we will restore the VM to its default state, from before the calc.exe “malware” was run.

run system::virtualbox stop 602782ec-40c0-42ba-ad63-4e56a8bd5657
run system::virtualbox snapshot_restore 602782ec-40c0-42ba-ad63-4e56a8bd5657 "before calc.exe"
run system::virtualbox start 602782ec-40c0-42ba-ad63-4e56a8bd5657
run remote::wmi get_win32_process
my @processes = map { $_->{Name} } @$RUN
my $found = grep { /calc.exe/ } @processes

screenshot-00008
All clear. No more calc.exe process.

You spoke about Volatility?

Yes. And that’s where it starts to get interesting. You can do the same processes analysis with Volatility (and of course much more). To use Volatility, you need a dump of the system’s memory. To acquire this dump, it’s as simple as using the system::virtualbox dumpguestcore Command. Then, you have to extract the memory dump that is part of the generated core file. You will use the extract_memdump_from_dumpguestcore Command.

Then, you will be able to perform forensic work on this memory dump, for instance to check whether calc.exe has been popped. Going back to the original subject -malware analysis-, you will find that Volatility is the tool of choice to check what a malware sample you just ran with the remote::winexe Brik did to processes, network handles or the registry. That’s a perfect combination of tools to extract IOCs from a malware sample.

run system::virtualbox dumpguestcore 602782ec-40c0-42ba-ad63-4e56a8bd5657 dump.core
run system::virtualbox extract_memdump_from_dumpguestcore dump.core dump.volatility

screenshot-00010

 

EDIT: on some versions of VirtualBox, you will have to use the dumpvmcore Command instead of dumpguestcore.

We have a dump usable by Volatility. Let’s dig into it with forensic::volatility Brik:

use forensic::volatility
set forensic::volatility input dump.volatility
run forensic::volatility imageinfo
set forensic::volatility profile $RUN->[0]
run forensic::volatility pslist

screenshot-00011

screenshot-00012

And voilà.

A feature of WINEXESVC: get a remote shell on Windows

One last screenshot in regards to remote::winexe Brik: how to get a Windows remote shell:

run remote::winexe execute cmd.exe

screenshot-00009

Conclusion

We have seen that we can easily perform malware analysis on a Windows machine by using a combination of Briks. By combining features of different tools (VirtualBox, winexe and Volatility) we can, for instance, analyse the consequences of running a malware sample on a machine. Extracting IOCs from a malware sample is useful if you want to find which machines on your information systems were infected by a particular sample. You could then use the remote::wmi Brik to scan your network for these specific patterns.

Extracting IOCs is a huge topic in itself, and we just scratched the surface here by using a dynamic method associated with a “scapegoat” VM. Another way of extracting IOCs is static analysis, but that’s a completely different story.

We urge you to play with Volatility (and of course Metabrik); you will see how powerful it can be. Enjoy.

Solving a root-me forensic challenge with Metabrik and Scalpel

I recently discovered the wonderful world of forensics and challenges (read: today). So I decided to add some new Briks just to solve some of them. Let’s dig into the “Find the cat” root-me challenge step-by-step. I will also show how I improved the Scalpel tool by wrapping it with other Briks.

Getting the files

Well, you have to register at http://www.root-me.org/ to get access to the challenge files. The one from this Metabrik example is called “Find the cat”, or “Trouvez le chat” in French. Once downloaded and extracted, you get these files:

cd /home/gomor/hgwork/metabrik/challenges/trouvez-le-chat/
ls
my $files = $RUN

The first thing to do is to check the MIME types of these files. You can also check their MAGIC types:

use file::type
run file::type get_mime_type $files
run file::type get_magic_type $files

screenshot-001

You have a txt file (you should read it for the story behind the cat theft) and a gzip file. Let’s uncompress this one:

use file::compress
run file::compress uncompress $files->[0]
my $file = $RUN
run file::type get_mime_type $file
run file::type get_magic_type $file

screenshot-002
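Under the hood, the uncompress Command here is plain gzip inflation. A minimal Python sketch of the same step, using only the standard library (the payload is a made-up stand-in for the challenge file):

```python
import gzip

# What file::compress does for a .gz file: inflate the gzip stream.
# The payload below is illustrative, not the real challenge data.
packed = gzip.compress(b"some filesystem image bytes")
restored = gzip.decompress(packed)
print(restored.decode())  # round-trips to the original bytes
```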

Wonderful. It appears to be some kind of filesystem image. Let's analyse it with Scalpel, a file carving forensic tool written in C.

Introducing Scalpel

To use Scalpel, you usually have to create a scalpel.conf file containing metadata on how to extract (or carve) files. For instance, if you want to find and extract ZIP files, you search for the string PK\x03\x04 in a bytestream.
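To make the header-scanning idea concrete, here is a minimal Python sketch (not Scalpel's actual code) that locates every ZIP local-file header in a byte stream, exactly the signature a scalpel.conf entry would declare:

```python
# Minimal carving sketch: find every offset where a ZIP local-file
# header (PK\x03\x04) starts in a byte stream. Each offset is a
# candidate carve point for a file carver like Scalpel.
def find_zip_headers(data: bytes) -> list:
    magic = b"PK\x03\x04"
    offsets = []
    pos = data.find(magic)
    while pos != -1:
        offsets.append(pos)
        pos = data.find(magic, pos + 1)
    return offsets

stream = b"garbage" + b"PK\x03\x04" + b"\x00" * 16 + b"PK\x03\x04" + b"tail"
print(find_zip_headers(stream))  # [7, 27]
```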

If you want to only check for some files (say, odt files), you have to comment out all the lines of that conf file except the ones for odt documents. If you want to search for all file formats, you have to uncomment all the lines. That is the first thing I changed when writing the forensic::scalpel Brik: a Command helps you generate the configuration file with only what you want to search for.

The other limitation of the tool is its inability to identify extracted files using libmagic (to check their MIME or MAGIC types). Thus, I added this feature to the forensic::scalpel Brik: it separates verified files from unverified ones.
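The verification idea can be sketched in a few lines of Python: a tiny signature table stands in for the full libmagic database (the table below is illustrative, not exhaustive):

```python
# Content-based type check in the spirit of libmagic: match leading
# magic bytes against a (tiny, illustrative) signature table.
SIGNATURES = {
    b"PK\x03\x04": "application/zip",
    b"\x1f\x8b": "application/gzip",
    b"\xff\xd8\xff": "image/jpeg",
}

def sniff_mime(data: bytes) -> str:
    for magic, mime in SIGNATURES.items():
        if data.startswith(magic):
            return mime
    return "unverified"  # would land in the "unverified" bucket

print(sniff_mime(b"PK\x03\x04rest"))  # application/zip
print(sniff_mime(b"\x00\x00junk"))    # unverified
```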

So, let's dig into this new Brik. It is built upon four existing Briks: shell::command, file::find, file::text and, of course, file::type. As you can see, new Briks can be written on top of already existing ones.

We will use it to extract files from the challenge (remember, you have to find where the cat is). Because I already know the result (spoiler), I will only search for odt files in the filesystem image ch9:

use forensic::scalpel
run forensic::scalpel generate_conf "[ 'odt' ]"
run forensic::scalpel scan $file
my $verified = $RUN->{verified}

We just wanted to keep the verified files: the ones that went through the file::type Brik and its MIME type identification. Read: the feature Scalpel lacks.

screenshot-004

We now have two odt files; let's open them with LibreOffice:

my $files = join(' ', @$verified)
libreoffice $files

You have a picture of a cat, and a message saying: “Free Alsace, or we kill the cat”. We must locate those miscreants. Since this is a picture, is there some EXIF metadata? Use the image::exif Brik to find out. But first, we have to extract the picture: odt files are simple ZIP files, so we use the file::compress Brik again:

run file::compress uncompress $verified->[0]
my $pic = './Pictures/1000000000000CC000000990038D2A62.jpg'
run file::type get_mime_type $pic

screenshot-005
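Since an odt is an ordinary ZIP container whose embedded images live under Pictures/, the extraction step can also be sketched with Python's stdlib zipfile module (the function name and layout here are ours, not the Brik's API):

```python
import zipfile

# Pull every embedded image (Pictures/ entries) out of an odt document.
def extract_pictures(odt_path: str, dest: str = ".") -> list:
    extracted = []
    with zipfile.ZipFile(odt_path) as odt:
        for name in odt.namelist():
            if name.startswith("Pictures/"):
                odt.extract(name, dest)
                extracted.append(name)
    return extracted
```

Calling extract_pictures("ch9.odt") would drop the embedded JPEG under ./Pictures/.
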

use image::exif
run image::exif get_metadata $pic

screenshot-006

Lots of metadata. But more interestingly, you have the latitude and longitude of the camera which took the picture. Bingo. Sarge, we found the evildoers, let's go catch them now. Over.
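For reference, EXIF stores GPSLatitude and GPSLongitude as degrees/minutes/seconds plus an N/S or E/W reference; turning that into decimal degrees is simple arithmetic. The coordinates below are made up for illustration, not the challenge's answer:

```python
# Convert an EXIF-style (degrees, minutes, seconds, reference) GPS value
# to decimal degrees. South and West references are negative.
def dms_to_decimal(degrees, minutes, seconds, ref):
    value = degrees + minutes / 60 + seconds / 3600
    return -value if ref in ("S", "W") else value

print(round(dms_to_decimal(48, 51, 24.0, "N"), 4))  # 48.8567
print(round(dms_to_decimal(2, 21, 3.0, "E"), 4))    # 2.3508
```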

Conclusion

By using a few Briks, we have shown how to solve a simple challenge. As you can see, you can nearly automate it from beginning to end. Practice with Metabrik by downloading the Docker image. Enjoy.

EDIT 2015/12/17: some Commands have moved from system::file to file::type, thus some Commands have been renamed in this post, but screen captures remain the same as before.

Is Telegram using encryption? How to discover it easily by yourself – Part 1

It is said that, by default, messages sent to a contact through Telegram (a messaging application for smartphones) are not encrypted. You have to enter a specific menu named “New Secret Chat” to enable end-to-end encryption. Let's verify that this is indeed the case by using some Briks.
 
Try by yourself using the Docker image:

docker pull metabrik/metabrik
docker run -it metabrik/metabrik

Let’s load some Briks for the work

We will need to perform a Man-in-the-Middle (MiTM) attack on our local network to intercept traffic from a smartphone to Telegram servers or remote peers. The network::arp Brik has such a function. We will also need to become a router, or the traffic will be lost: network::route comes to the rescue. Then we will have to analyse the traffic itself: we will use the network::read, network::stream and client::whois Briks to locate Telegram IP addresses. We will also use lookup::oui to find a potential smartphone on the network.

use network::arp
use network::route
use network::read
use network::stream
use lookup::oui
use client::whois

screenshot-002
Also, you have to execute the update Command on the lookup::oui Brik so it fetches the OUI file from the IEEE:

run lookup::oui update

Performing the MiTM attack

We will use ARP poisoning to perform a standard LAN MiTM attack. But we don't want to poison everyone; we just want to listen to a smartphone's traffic. We will use some ARP scanning techniques to gather available neighbors, and we will perform a lookup on each MAC address to retrieve the vendor. This information will lead us directly to a smartphone.
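What lookup::oui does boils down to this: the first three octets of a MAC address (the OUI) identify the vendor. A Python sketch with a two-entry table standing in for the IEEE file (vendor strings here are illustrative; real data comes from the downloaded IEEE file):

```python
# Vendor lookup from a MAC address: normalize it, keep the 3-byte OUI
# prefix, look it up. Tiny illustrative excerpt of the IEEE OUI table.
OUI = {
    "5C5188": "Motorola",      # the prefix seen later in this post
    "B827EB": "Raspberry Pi Foundation",
}

def vendor_from_mac(mac: str) -> str:
    prefix = mac.replace(":", "").replace("-", "").upper()[:6]
    return OUI.get(prefix, "unknown")

print(vendor_from_mac("5c:51:88:aa:bb:cc"))  # Motorola
```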

run network::arp scan
my $scan = $RUN
my $mac = [ keys %{$RUN->{by_mac}} ]
run lookup::oui from_hex $mac->[0]
run lookup::oui from_hex $mac->[1]

screenshot-004
Looks like we have found a Motorola smartphone. A perfect target for us. To gather its IP address, just issue a Command to retrieve data from a saved variable:

my $victim = $scan->{by_mac}{"5c:51:88:XX:XX:XX"}

screenshot-005
Now, we want to intercept traffic between the victim and the Internet. Thus, we will attack the gateway. We have to find its IP address, configure our host as a network router, and we will be ready to perform the ARP poisoning:

my $victim = "192.168.1.20"
run network::route default_ipv4_gateway
my $gateway = $RUN
run network::route enable_router_ipv4
run network::arp full_poison $victim $gateway

screenshot-006
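For the curious, the full_poison Command works by sending forged ARP replies in both directions, telling the victim that the gateway's IP lives at the attacker's MAC (and vice versa). A hedged sketch of what one such "is-at" reply looks like on the wire, built with Python's struct module (all addresses made up):

```python
import struct

# Build the 28-byte ARP reply payload: opcode 2 ("is-at") claims that
# sender_ip is reachable at sender_mac. In a poisoning attack, sender_ip
# is the gateway's IP and sender_mac is the attacker's MAC.
def arp_is_at(sender_mac, sender_ip, target_mac, target_ip):
    return struct.pack(
        "!HHBBH6s4s6s4s",
        1,       # hardware type: Ethernet
        0x0800,  # protocol type: IPv4
        6, 4,    # hardware / protocol address lengths
        2,       # opcode 2 = reply ("is-at")
        sender_mac, sender_ip, target_mac, target_ip,
    )

pkt = arp_is_at(b"\xaa" * 6, bytes([192, 168, 1, 1]),
                b"\xbb" * 6, bytes([192, 168, 1, 20]))
print(len(pkt))  # 28
```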

Conclusion

We have seen how to scan the local network in search of a specific device and how to launch a Man-in-the-Middle attack. This concludes the first part of this article. You may think it is a little bit short, but you will probably be eager to read the next part 🙂

Metabrik Core And Repository 1.10 Released

Following our lightning talk at the Hack.lu 2015 conference, we are proud to release version 1.10 of Metabrik Core and Repository. Update using Mercurial or follow the installation procedure.

You can find the few slides which were presented at the following link.

Lots of new awesome Briks

We added many Briks for this new release; here is a description of each:

  • api::bluecoat: play with the Bluecoat REST API
  • api::splunk: play with the Splunk REST API
  • api::virustotal: play with the Virustotal REST API
  • client::udp: a UDP socket client (UDP netcat)
  • client::ssl: check various properties of an SSL/TLS connection
  • client::rest: the base REST client for use with Briks from API Category
  • client::rsync: a wrapper around rsync program
  • client::twitter: a Twitter client
  • database::mysql: interact with MySQL databases
  • file::dump: read and write dump files
  • file::hash: generate various digests from files
  • file::ole: play with Microsoft files that embed OLE components
  • lookup::iplocation: geolocation for IP addresses
  • string::ascii: convert ASCII characters
  • string::csv: encode/decode CSV strings
  • string::hostname: parse a FQDN
  • string::regex: experiment with regexes
  • system::freebsd::pf: control Packet Filter
  • system::freebsd::jail: control jails

Just type help <Brik> to know more:

Meta:~> use string::regex 
[*] core::shell: use: Brik [string::regex] success
Meta:~> help string::regex 
[+] run string::regex encode <$regex|$regex_list>

Complete list of changes

Core

1.10 Tue Oct 27 20:13:36 CET 2015
   - FEATURE: core::context: allows to pass complex structs arguments to run and set Commands
     Example: run network::arp scan $info->{subnet}
   - FEATURE: core::context: allows also to execute Perl code within an Argument of a
     run Command
     Example: run client::dns ptr_lookup "[ map { @$_ } values %$RUN ]"
   - FEATURE: core::shell: allows to complete aliases (can be disabled via the
     aliases_completion Attribute)
   - FEATURE: shell::command: use_sudo Attribute to launch sudo on executing external command
   - FEATURE: shell::command: file globbing enabled with capture Command
   - UPDATE: moved attributes_default() from brik_use_properties to brik_properties when
     there is no need to use $self. It allows instantiated Attribute inheritance to work.
   - UPDATE: shell::command: do not print STDERR when using capture Command when there is no
     STDERR string captured.
   - new: shell::command: execute Command to use capture_mode Attribute to launch either
     capture or system Command
   - bugfix: core::context: save_state to use Metabrik brik_attributes Command to correctly
     retrieve all Brik Attributes even those inherited
   - bugfix: core::shell: display error on chdir() failure
   - bugfix: core::shell: escapes " character when executing a multiline Perl/Metabrik Code
             example:
             my $test = 'root'
             for (0..1) {
                'run shell::command system "ls /$test"'
             }
   - bugfix: Metabrik: error checking within new_from_brik_init Command
   - bugfix: Metabrik: logging correctly on class calls to _log_*()

Repository

- bugfixes and new Briks

20151011
   AFFECT: network::arp

   - network::arp scan Command now returns a hashref with results sorted
     with keys named by_mac, by_ipv4 and by_ipv6

20151003
   AFFECT: network::rsync

   - network::rsync renamed to client::rsync

20150418
   AFFECT: crypto::x509

   - Argument order changed for ca_sign_csr and cert_verify Commands

20150322
   AFFECT: file::csv

   - removed get_col_by_name and get_col_by_number obsolete Commands