
NIST Tests Forensic Methods For Getting Data From Damaged Mobile Phones


by Richard Press

Criminals sometimes damage their mobile phones in an attempt to destroy evidence. They might smash, shoot, submerge or cook their phones, but forensics experts can often retrieve the evidence anyway. Now, researchers at the National Institute of Standards and Technology (NIST) have tested how well these forensic methods work.

A damaged phone might not power on, and the data port might not work, so experts use hardware and software tools to directly access the phone’s memory chips. These include hacking tools, albeit ones that may be lawfully used as part of a criminal investigation. Because these methods produce data that might be presented as evidence in court, it’s important to know if they can be trusted.

“Our goal was to test the validity of these methods,” said Rick Ayers, the NIST digital forensics expert who led the study. “Do they reliably produce accurate results?”

The results of the NIST study will also help labs choose the right tools for the job. Some methods work better than others, depending on the type of phone, the type of data and the extent of the damage.

The study addresses methods that work with Android phones, and it covered only methods for accessing data, not decrypting it. However, those methods can still be useful with encrypted phones because investigators often manage to get the passcode during their investigation.

To conduct the study, NIST researchers loaded data onto 10 popular models of phones. They then extracted the data or had outside experts extract the data for them. The question was: Would the extracted data exactly match the original data, without any changes?

For the study to be accurate, the researchers couldn’t just zap a bunch of data onto the phones. They had to add the data the way a person normally would. They took photos, sent messages and used Facebook, LinkedIn and other social media apps. They entered contacts with multiple middle names and oddly formatted addresses to see if any parts would be chopped off or lost when the data was retrieved. They added GPS data by driving around town with all the phones on the dashboard.

Credit: R. Press/NIST
NIST computer scientist Jenise Reyes-Rodriguez uses the JTAG method to acquire data from a damaged mobile phone.

After the researchers had loaded data onto the phones, they used two methods to extract it. The first method takes advantage of the fact that many circuit boards have small metal taps that provide access to data on the chips. Manufacturers use those taps to test their circuit boards, but by soldering wires onto them, forensic investigators can extract data from the chips. This is called the JTAG method, for the Joint Task Action Group, the manufacturing industry association that codified this testing feature.

Chips connect to the circuit board via tiny metal pins, and the second method, called “chip-off,” involves connecting to those pins directly. Experts used to do this by gently plucking the chips off the board and seating them into chip readers, but the pins are delicate. If you damage them, getting the data can be difficult or impossible. A few years ago, experts found that instead of pulling the chips off the circuit board, they could grind down the opposite side of the board on a lathe until the pins were exposed. This is like stripping insulation off a wire, and it allows access to the pins.

“It seems so obvious,” said Ayers. “But it’s one of those things where everyone just did it one way until someone came up with an easier way.”

The chip-off extractions were conducted by the Fort Worth Police Department Digital Forensics Lab and a private forensics company in Colorado called VTO Labs, who sent the extracted data back to NIST. NIST computer scientist Jenise Reyes-Rodriguez did the JTAG extractions.

Credit: R. Press/NIST
Digital forensics experts can often extract data from damaged mobile phones using the JTAG method.

After the data extractions were complete, Ayers and Reyes-Rodriguez used eight different forensic software tools to interpret the raw data, generating contacts, locations, texts, photos, social media data, and so on. They then compared those to the data originally loaded onto each phone.

The comparison showed that both JTAG and chip-off extracted the data without altering it, but that some of the software tools were better at interpreting the data than others, especially for data from social media apps. Those apps are constantly changing, making it difficult for the toolmakers to keep up.

The results are published in a series of freely available online reports. This study, and the resulting reports, are part of NIST’s Computer Forensics Tool Testing project. Called CFTT, this project has subjected a wide array of digital forensics tools to rigorous and systematic evaluation. Forensics labs around the country use CFTT reports to ensure the quality of their work.

“Many labs have an overwhelming workload, and some of these tools are very expensive,” Ayers said. “To be able to look at a report and say, this tool will work better than that one for a particular case — that can be a big advantage.”

This research was funded by NIST and the Department of Homeland Security’s Cyber Forensics Project. Background information is available on the CFTT website, and the JTAG and chip-off reports are available on the DHS website.


Forensic Pattern Of Life Analysis


by Christa Miller, Forensic Focus

Pattern of life analysis isn’t a new concept to anyone who’s ever been involved with intelligence, in particular surveillance.  It’s all about the habits that people — suspects, persons of interest, crime victims, or those connected to any of the above — carry out in day-to-day life.

When it comes to digital devices, how users interact with them can tell a very detailed story about any given timeframe. There are two reasons for doing this. One, as Brett Shavers outlined in a blog post last year, is to tie a particular device to a user — more of an issue for a computer or tablet than a smartphone.

The second reason is to show what’s normal, so that investigators can key in on what’s not normal. Those departures are a starting point for why a person did something differently on a given day. For example, in a 2019 Florida case, health data, time-stamped photos, call logs, and GPS coordinates extracted from an iPhone XR refuted two suspects’ alibis.

Corroborated with evidence from the suspects’ social media accounts, the iPhone data placed the suspects at the crime scene in the same timeframe the murder took place. This enabled police to lay second-degree murder charges against the pair.

In a forensic context, pattern of life analysis is coming more front and center as investigators realize how much data — battery usage; device connections to Bluetooth, wireless networks, and vehicles; data consumption; health information; and more —  is available to plot a virtually minute-by-minute description of how a person spent a given day or even week.

The investigative value of this kind of information is profound — and so are its privacy implications. What do you need to know?

Analyzing patterns of life

In a SANS Institute webcast, Sarah Edwards, senior digital forensics researcher at BlackBag Technologies and a SANS Senior Instructor specializing in Apple product forensics, described different kinds of data and how they might be useful specifically to criminal investigations, such as distracted driving incidents, harassment, drug overdoses, dumped bodies, and literal smoking guns, as well as evidence deletion or antiforensic measures such as adding a passcode after deleting evidence.

  • Health data such as heart rate and steps or distance can show exercise patterns or suspect movements, as well as correlating weather and geolocation information on a given day and time. Edwards cautioned that sampling might include data from the Apple Watch and/or the phone’s pedometer.
  • The RoutineD database tracks where a user is located at any given point in time, showing which places a user visits often. These can be mapped and are granular, but Edwards cautioned that the data is only fairly accurate — possibly enough to introduce reasonable doubt.
  • The KnowledgeC database offers about 4 weeks’ worth of information about what apps have been used at any given point in time — even if they were deleted — and for how long (a minimal example query appears after this list). This data can be correlated to other databases, and media usage is tracked here too.
  • Apple CarPlay-related data is also tracked, including steps walked through a garage or parking lot, vehicle connection, location and routine information, and even health data such as an increase in heart rate in heavy traffic.
  • Wallet transactions through Apple Pay could be tracked, associating locations with transactions (though Edwards cautioned that transaction information isn’t specific). These can be more historical than other databases, tracking significant data points potentially over several years.
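To make this concrete, below is a minimal query sketch against the KnowledgeC database. The table and column names (ZOBJECT, ZSTREAMNAME, ZVALUESTRING, ZSTARTDATE, ZENDDATE) and the '/app/inFocus' stream come from published research on this database and may differ between iOS versions, so treat them as assumptions to verify against your own evidence. Timestamps are stored as Mac absolute time (seconds since 2001-01-01), hence the 978307200-second offset.

SELECT
    ZVALUESTRING AS bundle_id,                                   -- app bundle identifier
    datetime(ZSTARTDATE + 978307200, 'unixepoch') AS start_utc,  -- Mac absolute time -> UTC
    datetime(ZENDDATE + 978307200, 'unixepoch') AS end_utc
FROM ZOBJECT
WHERE ZSTREAMNAME = '/app/inFocus'                               -- app foreground usage stream
ORDER BY ZSTARTDATE;

A query along these lines returns one row per app-in-focus interval, which can then be correlated with location or health records from other databases.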

Pattern of life analysis can be useful for corporate investigations, too. For example, Edwards says, exfiltration of corporate data could involve a string of actions. “If I’m on their iOS device and I see that they took a picture with the camera, they saved the picture, they maybe uploaded it to Dropbox, [and] sent a message to a competing company over whatever secure chat messenger there might be… you’re putting that series of events together to tell a story,” she explains.

Problem solving with patterns of life

Link analysis is a concept some vendors including Oxygen Forensics, MSAB, and Cellebrite integrate in their tools. But the kind of pattern-of-life analysis Edwards researches extends far beyond communications and contacts, to device usage — effectively extending the concept outside of smartphones, to include the Internet of Things: connected vehicles, smart homes, and personal health devices among others.

“I see pattern of life analysis as a logical extension of [timeline creation and analysis in] digital forensics,” says Alexis Brignoni, a digital forensics researcher and blogger. Mobile device-specific artifacts, he adds, “… can tell investigators about intent and purpose, the truthfulness of an alibi, or a clear understanding on what happened and when.”

Brignoni is quick to point out that pattern of life analysis can be used to exonerate as well as to incriminate. “It is impossible to commit assault when the accused is hundreds of miles away at the day and time of the alleged event,” he explains.

On the technical side, Edwards’ research began as a way to tell a case story through analyzing many databases at one time. Analyzing only one database at a time, she explained, means the investigation can lose context, missing important connections.

“I was spending too much time querying 20-something databases [with] a million-plus different records…. That is too much for any single person to go through, you’re not going to be scrolling through [it],” she says.

In contrast, combining the records — the actions the user took —“… can really tell a story about how the user uses their device,” Edwards says. “You can look at third-party apps all day long, but how does the user use that app? Are they using it once [or] are they using it consistently?… Just getting that context can bring a lot into different investigations.”

It also doesn’t limit investigations only to communications or links between people. What people do when they aren’t in touch with others can have enormous investigative value. “If they’re using an application all the time and then all of a sudden they stop… that anomaly is not part of their pattern anymore, so that would be perhaps significant to an investigation,” says Edwards.

Brignoni agrees. “[A] murder victim’s work from home pattern of activity on a computer can be key since a lack or stop of expected digital activity might be indicative of a possible time of death,” he says. More broadly, the absence of data can be just as important: periods when the device isn’t being used at all, whether that silence fits the user’s pattern or breaks it.

Answering those questions is what determines the usefulness of a particular database or databases to a given case. “It’s figuring out… what pieces of data can help move [your investigation] forward, what questions do you need to answer,” says Edwards.

For example, locations could be recorded in different databases, but not all for the same time period. A database might also only store some locations, but not all; or the location itself might not be precise. Edwards also cautioned that Apple’s algorithm, which runs to determine what events are significant, could fragment data further across different databases.

To help with all of these issues, Edwards developed the Apple Pattern of Life Lazy Outputter (APOLLO). Choosing the most useful queries, Edwards says, is a matter of what the investigator needs to answer their questions, even if seemingly trivial, like whether the flashlight was used or the device was plugged in.

Examiners can further filter down by time, app, or scenario. This is important because of the millions of records. “You’re going to be pivoting off some piece of data: a contact, an application, a moment in time,” Edwards says.

As a SQLite utility, APOLLO isn’t meant just for iOS. It’s also possible to run Android or Microsoft databases through it, as long as they’re SQLite databases and you can write a query for them. Edwards built it this way to make it easy to contribute to, and in fact, APOLLO ingests and normalizes Android XML UsageStats data, along with additional artifacts, from Brignoni’s Android Review Timeline Events Modular Integrated Solution (ARTEMIS).

Although APOLLO isn’t “pretty” and doesn’t offer visualizations, it’s available as a BlackLight plug-in, which makes it easier to look through columns and records. The plugin can be reloaded into BlackLight with each update, without having to download a new version of BlackLight. 

Technical and legal issues to watch out for

On the technical side, Edwards’ SANS webcast outlined the problems she encountered:

  • Getting to the data. The kinds of data needed for a solid pattern of life analysis require a specific type of data extraction — not an iTunes backup, which offers only limited data such as a health database, sysdiagnose dump, and powerlog information — but rather, extractions available from third-party labs, GrayKey, or jailbreaks, including the recent Checkra1n exploit available in forensic tools.
  • Data correlation. A full file system extraction from an iOS system results in data from so many databases that correlating them all can take time — as can researching each database to learn how to interpret the data within them. In addition, says Brignoni, when a key database is missing, “[H]aving additional sources of data that mirror UsageStats becomes key.”
  • Analysis time. The databases, said Edwards, are “consistently inconsistent; you never know what you’re going to get.” For example, depending on the database:
    • Timestamps could be offered in Unix, Mac, or epoch format, or offset (see the conversion sketch after this list).
    • Most databases are temporal, enabling the examiner to sort their records by time. Others, like the aggregate dictionary, aren’t temporal, but contain valuable data nonetheless — for example, how many times touch ID was used on the device.
    • Units of measurement are sometimes documented, but sometimes not. In these cases, rather than guess, Edwards stresses the importance of research and testing.
    • 1’s and 0’s don’t necessarily mean “on” or “off” respectively; Edwards said she’s even seen a data value of 2 on occasion.
    • Usage stats don’t track content, so for example, the KnowledgeC database will show “intent and activities” such as composing a message, but not what its content was. For that, an examiner would have to go into the artifacts for the specific app.
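As a quick reference, the sketch below shows how the common epoch conventions mentioned above can be normalized with SQLite’s datetime function. The table name example and the column name ts are hypothetical placeholders for whatever table and timestamp field a given database actually uses.

-- Unix epoch: seconds since 1970-01-01
SELECT datetime(ts, 'unixepoch') FROM example;

-- Mac (Cocoa/NSDate) absolute time: seconds since 2001-01-01
SELECT datetime(ts + 978307200, 'unixepoch') FROM example;

-- WebKit/Chrome time: microseconds since 1601-01-01
SELECT datetime(ts / 1000000 - 11644473600, 'unixepoch') FROM example;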

Moreover, said Edwards, database schemas change every year, which changes queries as a result. That’s a problem vendors have, too, but open-source tools like APOLLO and ARTEMIS have the chance to be updated much more rapidly with community support.

Most of all, both Edwards and Brignoni stress the need to test database information because some of it might be misleading. In his KnowledgeC blog post from October 2019, Mike Williamson used “Now Playing” media as an example of how this can happen.

Tool testing and validation takes time, of course, and the processes are likely to differ between criminal and corporate forensic labs. But Brignoni offers two solutions.

First, he says, “Having a dedicated testing program can speed up validation where the individual examiner tasked with a case can then dedicate her validation efforts to only the most critical parts of the digital examination process and artifacts at hand.

“Another way of speeding the artifact testing process is to use multiple tools to both validate and deepen analysis,” he adds. “The purpose again being the narrowing of manual verification by the examiner to those key artifacts in the investigation.”

On the legal side, Brignoni calls out pattern of life artifacts’ inherent intrusiveness. “How wide or narrow the artifact collection window will be depends on a combination of legal authority and investigative needs,” he says. 

Even then, where legal authority lands could be in question. Following a 2012 case, United States v. Jones, 565 U.S. 400 (2012), and later, as Carpenter v. United States, 585 U.S. (2018) made its way through the courts, legal experts debated whether a search of the “mosaic” of personal data could require a search warrant of its own.  

The crux of that argument was the fact that even within a limited timeframe, police can’t differentiate between the activities they’re investigating, and constitutionally protected activities such as attending church.

Matthew Osteen, General Counsel and Cyber and Economic Crime Attorney for the National White Collar Crime Center (NW3C), says Carpenter embraces data aggregation — but only for a single type of data. At the heart of the issue, he adds, is “third-party doctrine,” or the notion that users give up certain privacy rights when they use a third-party service, like a utility company.

That’s why U.S. law enforcement historically hasn’t needed to get a search warrant for phone records, for instance — only a court order or subpoena, depending on the level of information they need.

Carpenter, says Osteen, “is being treated as a categorical exception to the third-party doctrine” because of the degree of pattern-of-life data that cell towers can offer. Many federal, state, and local agencies in the U.S. advise investigators to obtain a search warrant for all CSLI, even if the aggregated data falls below the seven-day threshold defined in Carpenter.

When it comes to aggregating multiple data sources, Osteen says, “It may be that courts will look at all types of data collected in totality as a single collection or it might view each type of data collected as distinct collections.”

Either way, the data collection time period makes a difference. One hour of heart rate and location information data — treated as distinct collections — might not be enough for an exemption from third-party doctrine, but a week’s worth of the same data likely would be.

On the other hand, a court looking at one hour’s worth of aggregated heart rate and location information might conclude that “mosaic” is a violation of privacy. “At this juncture it’s hard to say how a court would view that data,” says Osteen, “especially when some types of data, i.e. health data, is more voluntarily turned over to service providers than [CSLI or] other types.”

These issues highlight the importance of talking regularly with prosecutors in your jurisdiction, not only around geolocation information, but also around the degree of information you’re working with.

For more detailed technical information about APOLLO and its artifacts, Edwards’ mac4n6 blog lists the entire series of her initial research. You can also listen to her archived SANS webcast or her on-demand BlackBag webinar on the topic.  

If you’d like to contribute to either APOLLO or ARTEMIS, find them both at GitHub or reach out directly to Sarah Edwards or Alexis Brignoni on Twitter.

Chromium-Based Microsoft Edge From A Forensic Point Of View


by Oleg Skulkin & Svetlana Ostrovskaya

Recently Microsoft finally released the Chromium-based version of Edge Browser, so it seems we’ll miss ESE databases soon (not). Of course, it may have a similar set of forensic artifacts to Chromium or Chrome, but we must check it anyway. What’s more, the browser is available not only for Windows, but also for macOS, Android and iOS. 

On Windows, Edge data is available under the following location:

C:\Users\%USERNAME%\AppData\Local\Microsoft\Edge\User Data\Default

Let’s start with bookmarks or “favorites”. They are stored in a JSON file named Bookmarks, which you can open with any text editor. The timestamps are stored in WebKit format – a 64-bit value counting microseconds since Jan 1, 1601 00:00 UTC.

Cache is stored in the Cache subfolder and consists of an Index file (index), Data Block files (data_#) and data files (f_######). You can easily parse these files with ChromeCacheView by NirSoft:

Microsoft Edge cache parsed with ChromeCacheView

Cookies are stored in an SQLite database called Cookies. We need the cookies table; here is the query:

Microsoft Edge cookies
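The original article presents the query as a screenshot; a sketch of an equivalent query is shown below. The column names follow the standard Chromium cookies schema, which Edge inherits, and should be verified against the specific browser version you are examining.

SELECT
    host_key,
    name,
    path,
    datetime(creation_utc / 1000000 - 11644473600, 'unixepoch') AS created_utc,  -- WebKit time -> UTC
    datetime(expires_utc / 1000000 - 11644473600, 'unixepoch') AS expires_utc
FROM cookies
ORDER BY creation_utc;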

As you can see, we can easily convert timestamps in WebKit format with the datetime function.

Information about files downloaded with Microsoft Edge is available in the History SQLite database. You can get it from the downloads table:

Microsoft Edge downloads
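A sketch of such a query is below. The column names follow the common Chromium downloads schema and should be checked against the specific Edge build you are examining.

SELECT
    target_path,
    tab_url,
    total_bytes,
    datetime(start_time / 1000000 - 11644473600, 'unixepoch') AS started_utc,
    datetime(end_time / 1000000 - 11644473600, 'unixepoch') AS ended_utc
FROM downloads
ORDER BY start_time;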

One more useful table here is urls. Again, you can use a simple query to obtain information about visited sites and timestamps:

Microsoft Edge visited sites
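For example, a minimal query of this kind (again assuming the standard Chromium urls schema) could look like this:

SELECT
    url,
    title,
    visit_count,
    datetime(last_visit_time / 1000000 - 11644473600, 'unixepoch') AS last_visit_utc
FROM urls
ORDER BY last_visit_time DESC;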

Edge stores autofill information such as profiles, locations and card numbers in the Web Data database. Saved credentials are stored in the Login Data database. You can find URLs and associated login data in the logins table.

However, all of the passwords are encrypted. For decryption you can try ChromePass by NirSoft. This tool allows you to recover passwords from a running system or an external drive. Needless to say, you can easily mount your evidence item, e.g. with FTK Imager, and use it as an external drive. The only thing you will need is the Windows profile password.

ChromePass settings

As a result you will be able to get such information as Origin and Action URLs, User Name, Password in plain text and its creation date.

Microsoft Edge saved credentials

Progressive Web Applications (PWAs) are one of the top features of the Edge browser. They allow you to “install” any website on your device as a web application. Under the hood, msedge_proxy.exe takes the profile directory and application ID as arguments and runs an application shell (a static template) that loads the dynamic content from the URL described in the manifest.

Installed webpage shortcut

The manifest file is stored under the Extensions\<App_ID> subfolder. 

Microsoft Edge extensions and applications

The same folder contains the source code of newly added extensions. Each extension has its own subfolder named after its unique ID.

On macOS, the Edge files are pretty similar and can be found under:

/Users/%USERNAME%/Library/Application Support/Microsoft Edge/Default

Microsoft Edge profile directory

As you can see, information about bookmarks, visited URLs, downloads, cookies and so on is stored in the corresponding files and SQLite databases, so the previously described techniques could be used to obtain this data. 

Note that on macOS, cache is stored separately in the /Users/%USERNAME%/Library/Caches/Microsoft Edge/Default/Cache folder. However, you can still use ChromeCacheView to parse it.

Our next stop is iOS. All of the Edge files are stored under: 

/private/var/mobile/Containers/Data/Application/<UUID>

Therefore, you need to match the UUID to Microsoft Edge. How to do it? Quite easy! All you need is applicationState.db located under /private/var/mobile/Library/FrontBoard/.

Let’s start by finding the right ID in the application_identifier_tab table. In our case, the ID of com.microsoft.msedge is 121. Now we can look at the kvs table and filter the application_identifier column using the ID we just found. The value column contains binary plists that we need to export; DB Browser for SQLite can be used for this task, for example. Once exported, the plist can be examined with your favorite plist viewer:

Exported binary plist contents
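For reference, the ID lookup and filtering described above can also be expressed as a single query. The table and column names below (application_identifier_tab, kvs) reflect the commonly documented applicationState.db schema and should be treated as assumptions to verify against your own evidence.

SELECT
    ait.application_identifier,
    kvs.value                          -- binary plist blob to export
FROM kvs
JOIN application_identifier_tab AS ait
    ON kvs.application_identifier = ait.id
WHERE ait.application_identifier = 'com.microsoft.msedge';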

Now we know that Microsoft Edge’s UUID is 565EC255-F158-48E1-83C5-D426BC60D22D, so we can easily find application data.

First, you may want to check the OfflineCache SQLite database, which keeps the history of visits and is located in the Documents subfolder. Visited URLs with Apple NSDate-formatted timestamps are stored in the ZONLINESEARCHHISTORY table and can be obtained with the following query:

Microsoft Edge browsing history
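The article presents the query as a screenshot; a query of this kind would follow the general shape below. The column names ZURL and ZDATE are illustrative placeholders (Core Data column names vary), while the 978307200-second offset is the standard conversion from Apple’s 2001-based NSDate epoch to Unix time.

SELECT
    ZURL,                                                     -- placeholder column: visited URL
    datetime(ZDATE + 978307200, 'unixepoch') AS visited_utc   -- NSDate -> UTC
FROM ZONLINESEARCHHISTORY
ORDER BY ZDATE DESC;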

The OfflineCache database also stores added bookmarks and data saved in the browser, so you can check them as well using the same DB Browser for SQLite.

In addition to the history of visits, you can check the Library/Caches/WebKit/NetworkCache/Version 14/Records/<Website_ID>/Resource subfolders to get an idea of the downloaded content.

Microsoft Edge network cache

As you can see, there are different files and blob objects that could be opened with any text editor. If you are lucky, you can find some blobs with magic bytes and obtain the downloaded content itself:

Downloaded picture

Another useful location is the /Library/Cookies/ subfolder. Here you can find the Cookies.binarycookies file, which can be parsed with EdgeCookiesParser.

Cookies.binarycookies parsed with EdgeCookiesParser

Last but not least is Android. Microsoft Edge stores its data the same way as on Windows and macOS. All the necessary files and SQLite databases can be found in the /data/data/com.microsoft.emmx/app_chrome/Default folder. Cache is stored under /data/data/com.microsoft.emmx/cache/Cache and can be parsed with ChromeCacheView.

As you can see, extraction of the most important browsing data is possible with a few quite simple SQL queries. As we are dealing with SQLite databases, you should not forget about freelists and unallocated space; examining them may uncover even more artifacts, which may contain the key to your investigation.

About the Authors

Oleg Skulkin is a senior digital forensic analyst at Group-IB, one of the global leaders in preventing and investigating high-tech crimes and online fraud. He holds a number of certifications, including GCFA, GCTI, and MCFE. Oleg co-authored Windows Forensics Cookbook, Practical Mobile Forensics and Learning Android Forensics, as well as many blog posts and articles on digital forensics and incident response that you can find online.

Svetlana Ostrovskaya is a digital forensic trainer at Group-IB, one of the global leaders in preventing and investigating high-tech crimes and online fraud. She co-authored many training programs, including Windows Memory Forensics, Advanced Windows Forensic Investigations and Windows Incident Response and Threat Hunting.

Everything You Ever Wanted To Ask About Checkm8 And Checkra1n


by Oxygen Forensics 

What’s Checkm8?

Checkm8 is an exploit (a program that takes advantage of OS or hardware vulnerabilities) aimed at gaining the ability to execute its own code at the earliest stage of the iOS device boot process.

What makes it stand out?

The significance, and honestly the hype, surrounding Checkm8 is that the vulnerability on which it is based cannot be patched by a software update or change: it sits in read-only boot ROM code, which is written at the stage of manufacturing the device chip and cannot be rewritten. This means that all iOS devices prone to this vulnerability will always remain vulnerable, regardless of the iOS version.

What are the limitations?

The exploit is only executed in Random Access Memory, so it is not persistent. This means that after switching off or restarting the device, it will load in normal mode and the investigator will have to execute checkm8 again.

Using Checkm8, it is not possible to bypass a passcode or quickly crack it, since the processing of the passcode and biometric data, and the data encryption based on them, are performed within the Secure Enclave Processor, which checkm8 has no access to.

List of supported devices

Devices prone to the vulnerability:

  • All devices based on the following processors: s5l8940x (A5), s5l8942x (A5 Rev A), s5l8945x (A5X), s5l8947x (A5 Rev B), s5l8950x (A6), s5l8955x (A6X), s5l8960x (A7), t8002 (including S1P and S2), t8004 (S3), t8010 (A10), t8011 (A10X), t8015 (A11), s5l8747x (Haywire video adapter processor), t7000 (A8), t7001 (A8X), s7002 (S1), s8000 (A9), s8001 (A9X), s8003 (A9) and t8012 (used in iMac Pro);
  • All iPhones from iPhone 4S to iPhone X;
  • iPad 2, iPad (3rd generation), iPad (4th generation), iPad (5th generation), iPad (6th generation), iPad (7th generation);
  • iPad Air and iPad Air 2;
  • iPad Pro (12.9-inch), iPad Pro (9.7-inch), iPad Pro (12.9-inch) (2nd generation), iPad Pro (10.5-inch);
  • iPad mini, iPad mini 2, iPad mini 3 and iPad mini 4;
  • iPod touch (5th generation), iPod touch (6th generation), iPod touch (7th generation);
  • Apple Watch Series 1, Apple Watch Series 2 and Apple Watch Series 3;
  • Apple TV (3rd generation), Apple TV (4th generation) and Apple TV 4K.

Devices supported by checkm8 exploit:

  • Currently the exploit is adapted to be used on devices based on processors: s5l8947x, s5l8950x, s5l8955x, s5l8960x, t8002, t8004, t8010, t8011 and t8015.

What’s Checkra1n?

Checkra1n is a semi-tethered jailbreak based on the checkm8 exploit. Basically, the checkra1n developers used the exploit to gain the ability to execute their own code at the first stage of the iOS boot process (the same ability checkm8 itself provides). They then changed the rest of the boot process so that, after the device has loaded, the investigator has root access to the file system and can execute any unsigned code.

Installation (on macOS)

  • Download the needed macOS version from the official website
  • Run the downloaded .dmg file by double-clicking on it
  • In the opened window, drag the checkra1n icon to Applications

Usage: GUI mode

To run and install checkra1n in GUI mode:

  • Open the Applications folder on the Mac
  • Right-click on the checkra1n icon and select Open from the drop-down list
  • In the warning window that appears, select Open to confirm
  • If the application does not open, run it again via a double-click
  • Connect the device, wait till it has been detected, and press Start

  • Click Next. The device will load in recovery mode
  • Click Start and put the device in DFU mode, following the instructions

  • If the device does not enter DFU mode, click Retry to try again

  • Wait till the installation has finished
  • If installed successfully, the investigator can access SSH over USB on port 44.
  • After the installation is complete, the checkra1n application will be added to the device home screen. To install Cydia (unofficial AppStore), run checkra1n, click Cydia and install it.

Note: if the device has entered DFU mode and has stopped responding (blank black screen), or running log text has appeared on the device screen while the system kernel is being patched, simultaneously press and hold the side button and the home button (or volume down button) until the device restarts.

Usage: CLI mode

To run checkra1n in console mode, launch the Terminal application on the Mac and enter the following commands:

cd "/Applications/checkra1n.app/Contents/MacOS"

./checkra1n_gui –

The console version of checkra1n will launch. Connect the device in DFU mode and the jailbreak will be installed automatically.

NOTE: These commands should be entered after dragging checkra1n.app to the Applications folder on macOS.

GUI and CLI modes: what’s the difference?

  • When running checkra1n in CLI mode, there is no verification of the device model and iOS version
  • In our experience, all versions of checkra1n install on devices with iOS 13.2.3-13.3 in CLI mode.

Important differences between versions

  • When installing checkra1n versions 0.9.6 and 0.9.7 on devices with iOS 13.2.3-13.3, the device would be in USB restricted mode after a reboot until unlocked
  • USB restricted mode does not allow checkra1n to finish its installation, and the SSH connection won’t work
  • A few times, USB restricted mode switched on for devices with iOS 12.4 when installing checkra1n 0.9.7; it is not yet known why this happened.
  • When installing earlier checkra1n versions (from 0.9 to 0.9.5), USB restricted mode does not switch on regardless of the iOS version. Thus, those checkra1n versions could be installed on devices without unlocking them and used to establish an SSH connection.

Checkra1n traces

To remove the obvious traces of using checkra1n:

If Cydia wasn’t installed, restarting the device would be enough.

If Cydia was installed

  1. Open the checkra1n app on the device and press Restore System. The device’s original file system will be restored.
  2. Technically, the jailbreak has now been erased from the phone, but the Cydia app is still present.
  3. Install checkra1n again without installing the Cydia app.
  4. Connect the iPhone to the computer, open a Terminal window and use the following command to install Homebrew:

/usr/bin/ruby -e “$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)”

  5. Then use this command:

brew install libimobiledevice

  6. Open a new Terminal window and use the command:

iproxy 2222 44

  7. Leave the Terminal window open. Press CMD+T to open a new tab and then use the command:

ssh root@localhost -p 2222

NOTE: if you haven’t manually changed the password, it will be ‘alpine’.

  8. Enter yes and press Enter. Then enter the following command in the Terminal window and press Enter once again:

uicache --all

  9. The process will take some time. After it’s finished, enter the following command:

killall SpringBoard

  10. Restart the device to remove the checkra1n app.

NOTE: The checkra1n icon might not disappear immediately after restarting the device.

After removing the visible traces of checkra1n, some checkra1n-related files might remain in the device file system. However, their directories would be inaccessible without a jailbreak.

Please note that starting from Oxygen Forensic Detective 12.2 Apple iOS devices with checkra1n are fully supported. 

Opinion: When Digital Forensics Vendors Hire Research Talent, Where Does It Leave Research?


by Christa Miller

In the second half of 2019, a set of hirings made some waves in the digital forensics community. First, in July, Cellebrite hired well-known SANS Senior Instructor Heather Mahalik. Then in August, Mike Williamson joined Jessica Hyde, Christopher Vance, and others at Magnet Forensics. In December, the set was completed when BlackBag Technologies hired likewise well-known SANS Senior Instructor Sarah Edwards.

“Name” researchers going to work for vendors is nothing new, of course. Amber Schroader founded Paraben in 1999; Lee Reiber took over as Oxygen Forensics’ Chief Operations Officer in 2015, while Edwards’ transition to BlackBag put her in the already well-established research powerhouse of Vico Marziale and Joe Sylve.

Then Cellebrite acquired BlackBag, consolidating that powerhouse together with Mahalik and a formidable R&D team. With that, the talent acquisition process began to feel more like a research ring match, with Cellebrite and Magnet Forensics trying to knock each other’s blocks off. Where does that leave research itself?

The tradeoff between resources and profit

By now the industry takes it as a given that no one — neither vendors nor independent researchers — can keep up with the new devices, apps and app versions, or operating systems and their versions. Near-constant changes to these elements affect the way they store data, and in turn the way forensic tools acquire and parse that data.

Researchers need two things to attempt to keep pace: time and funding. It’s rare when research can be done for its own sake. Most is done in conjunction with casework or coursework and involves a specific device make / model, or a specific app.

Arguably, vendors are investing in research “deep work” that can ultimately make their tools stronger and serve a wider range of forensic examiners with highly relevant acquisition and analysis capabilities.

On the other hand, no private entity invests in anything without anticipating a return. Community goodwill is valuable, but only if it results in additional sales. Whereas independent research has always been about solving interesting problems and sharing the results with the community in the hopes that it will help, vendor tool development focuses on the most critical needs — identified by the community, yes, but prioritized by how frequent the feedback is.

That means the price of investing in “deep work” may be the kind of research that solves interesting problems. The concern isn’t so much that it will become the vendor’s intellectual property — Edwards’ APOLLO remains on GitHub in addition to being a BlackLight plugin, for instance — as it is the research’s focus. The really “interesting” problems may well be paywalled behind the labs that some vendors now run.

A closed, black-box competitive advantage

Again, proprietary research is undisputedly part of the business and has been for a long time. But the risks of too much concentration in the vendor realm were highlighted in 2011’s Digital Forensics with Open Source Tools, where Cory Altheide and Harlan Carvey wrote of their “experiences where proprietary forensic products have produced demonstrably erroneous results, but these tests were performed in a ‘black box’ scenario.”

This problem has persisted and accelerated, as new app or operating system versions are known to “break” both proprietary and open source tools — showing incorrectly parsed data, for example, that can lead to erroneous interpretations.

In those cases, vendors tend to patch quietly. Whether out of fear of embarrassment in a tightly competitive market, or out of concern over broader admissibility implications, it’s difficult to say. (Imagine that a judge, misunderstanding how easy these kinds of errors are to make, could call into question every result from that brand, as if all the variables were monolithic.)

It isn’t as if researchers’ efforts will disappear behind a veil of secrecy, as numerous blogs, presentations, podcast episodes, and the like already show. On the other hand, their research, written up for official vendor blogs, wouldn’t necessarily include what a vendor wanted to hold back. Often, what goes unsaid is as important as what goes on the record.

One other risk of more competitive research: collaboration. Researchers who might once have freely shared with one another and built on each other’s research are less at liberty to do so now. “Coopetition” might allow a kind of uneasy truce-forging between vendors in the name of, say, a webinar or topical lecture at a conference. However, just as monopolies limit innovation by limiting competition, researchers may be less willing to share the results of their work if transparency means limiting their employers’ competitive advantage.

It might be compared to a vendor’s outright purchase of intellectual property that leaves no open source alternative. By trading source code for scale and an easy-to-use interface, the tool developer limits its visibility. The work that went into the original research could conceivably be replicated, but it would be unnecessarily duplicative — and besides, limiting examiners’ options for testing and validation when they are already pressed for time doesn’t serve justice.

Of course, just because the admissibility of digital evidence isn’t frequently challenged doesn’t mean it won’t be. While it’s understood that proprietary methods are protected during court proceedings, worth noting is that one of the foundations of admissibility is whether a theory or tool has been tested. Another is whether the results are reproducible and repeatable.

Experienced professional forensic examiners like Mahalik, Edwards, and others know this. That’s why it’s encouraging to see Mahalik, in her recent post, call for additional research to be shared with the community — a springboard of sorts. By using her platform to encourage this kind of work, she’s accomplishing two things:

  1. Promoting independent research and tool development ultimately enables examiners to validate the results of tools like UFED Physical Analyzer.
  2. Actually highlighting the research and independent analysts who conduct it.

The commoditization of relationships and the community

Part of the reason researchers got to where they are is their strong, consistently shared research. Blog posts, Twitter feeds, podcasts, and other media offer a transparent means of showcasing digital forensic research.

Open-source community projects notwithstanding, however, the concentration of so much top talent behind vendors’ closed doors effectively plays off one against another — and commoditizes the relationships they have with community members.

Certainly, researchers who have strong relationships with the community, as well as strong reputations for being hitherto vendor-neutral, stand a better chance than, say, an average product manager in asking for suggestions or feedback on trends and the tools themselves.

But communication is part speaking, part listening. Vendors are driven by business strategy, and their viewpoints can be affected by business goals and needs. Just as the bottom line drives R&D, these filters might inadvertently result in key insights or trends going unrealized, key content never being written or recorded, and so forth.

Certainly, responses to polls and questions on social media can benefit more than one vendor, as well as independent researchers. Likewise, examiners who request research from one vendor aren’t placed under an NDA that would stop them from asking the same of another vendor.

In fact, this might be considered a duty, particularly when using more than one tool. Asking multiple vendors for the same feature can only improve examiners’ ability to test and validate its results — assuming, again, that vendors choose to put their resources into that particular feature.

Towards stronger community-based research

It might be wise to redefine “research” to encourage more people to conduct it. Not everyone can write their own parsing scripts — or enjoys writing blog posts — but Edwards has stated that basic verification can be performed in as little as five minutes, and screenshots can accompany tweets or messages posted to the Forensic Focus forums, the digital forensics Discord server, and other venues.

Meanwhile, researchers like Josh Hickman are posting test images that the community — as well as vendors — can use to validate their tools, and populating test data for given apps can be gamified between researchers or even as a student project.

Another good-faith way for vendors and researchers to build community is by replicating the APOLLO approach, in much the same way Cellebrite and Belkasoft did when they recently included the Checkm8 solution in their tools. The original Checkra1n exploit remains free and available for testing to validate extraction results — on test devices, naturally — but the tools offer a forensically sound data acquisition method. (Bonus when the vendors are willing to give credit where it’s due to the researchers who created the tools.)

As for whether enough independent blogs exist to balance proprietary research, this might be a case in which less is assuredly not more. As independent researcher and blogger Alexis Brignoni observed in a recent blog post: 

“… as expected, a new OS version will break previously [known] good artifact parsers for both third party apps and native files. It is our job to figure out where the known but now lost items are as well as finding new artifacts we weren’t aware of. This is how toolmakers can focus effectively on what is needed to be done, by [us] doing the work and telling them it is important to us….

“As examiners we own the data we are tasked with processing and it is our responsibility to verify that any inferences gathered from it are exact and backed up by the source. We are uniquely positioned to identify gaps in knowledge, to work in filling them up, and sharing that knowledge with others that can automate the process to the benefit of the greater community of practitioners…. Your perspective is needed, your expertise is essential. Make it known.”

How To Acquire Cloud Data With MD-CLOUD


‘17.5 Zettabytes.’ This is the amount of data that IDC estimates will be generated annually by 2025, and within that total, cloud traffic is expected to grow to reach 18.9 Zettabytes by 2021.

This tremendous amount of cloud data is generated and fueled in the course of building driver assistance and autonomous vehicle technologies; IoT devices including sensors in our bodies, homes, factories, and cities; high-resolution content for 360 video and augmented reality; and 5G communications globally.

As digital forensic investigators face so-called ‘digital transformation’, finding evidence data from various cloud services is a highly demanding and important mission. Cloud forensics is no longer optional but an essential solution, since many law enforcement professionals work on cases involving devices with deleted data, which requires further investigation of the backup data. Plus, there is a tremendous and growing number of smartphones, IoT devices, automobiles and many more smart devices which store all their data only in cloud services.

This article introduces HancomWITH’s cloud forensic solution: a step-by-step guide to data extraction and data viewing using MD-CLOUD. Various cloud and email services are supported, and data stored in social networking services such as Twitter, Facebook and Tumblr can be extracted by MD-CLOUD.

MD-CLOUD Overview

Product Highlights

  • Supports extraction from global cloud services such as Google and iCloud
  • Supports extraction of cloud-based IoT device data
  • Supports extraction from cloud services based in East Asia, such as Baidu and Naver Cloud
  • Authenticates via ID and password, two-factor authentication, Captcha, and token credential information found locally on smartphone images, such as iOS Keychain
  • Includes automated web scraping tool for recursively capturing public webpages
  • Provides automatic evidence-tagging feature for intuitive searching
  • Natively integrates with MD-RED

Key Features

Supports a wide variety of cloud services

Google, iCloud, Samsung Cloud, Naver Cloud, Evernote, One Drive, Baidu

Supports email extraction

POP3 and IMAP, as well as specific support for Gmail and Naver Mail

Supports extraction from social media services

Current support for Twitter and Tumblr, with Facebook support under active development

Specializes in East Asian cloud services

Baidu Cloud in China

Naver Cloud in South Korea

Acquisition of cloud-based IoT device data

IoT data extraction from AI Speakers and Smart Home equipment

Supports authentication via both public and unofficial APIs

Supports various authentication methods

ID and Password

Captcha image tests

Two-Factor Authentication messages

Credential data pulled from smartphone dump images (such as iOS Keychain)

Provides automated web capture feature

Automated web-crawler capable of recursively extracting from a target web page

Real-time extraction progress monitoring

Displays the progress of ongoing extraction jobs in real time, from zero to one hundred percent

User-friendly interface

Features a simple, intuitive, and effective user experience that requires little training

Native MD-RED integration

Imports credential information found in suspect smartphone images that have been analyzed in MD-RED

Intuitive ‘Evidence Tagging’ based search feature

Automatically tags and categorizes data as it’s extracted from the cloud so that it can be quickly searched, grouped, and organized

Built-in data preview

Supports previewing any selected image, video, document, web page, email, and many more

Supports filtering by date range and file type

Allows users to limit the results of their analysis only to the time period and file types relevant to their case

Hash based data integrity assurance

Guarantees the integrity of the evidence data through powerful hash algorithms such as MD5 and SHA256

Report generation

Provides simple-yet-powerful report generation tool that supports both PDF and Excel formats

Below is a simple but useful guide on MD-CLOUD for those investigators who would like to maximize their digital forensic skills and be prepared for the cloud data tsunami.

1. Data extraction using ‘Credential information’

1.1 Create New Case

MD-CLOUD can access cloud services in several ways. Specific services may ask the user to complete an additional verification process, such as a Captcha entry or two-factor authentication. To start a new cloud data acquisition, select ‘New Case’ and set the case name and path. This time we’ll try using credential information.

1.2 Select service and proceed with data extraction

Various services such as cloud, email, SNS and IoT devices are supported by MD-CLOUD. These are displayed and categorized by type.

In this sample case we will try extracting data from Google. Select the Google icon on the left side of the screen, and then, with the checkboxes, the user can perform selective data extraction. Date range and extraction type can be set before proceeding with the extraction process; the resulting data will then be collected according to those filter conditions. Furthermore, even after the extraction is completed, additional data sources can be added to the existing case without having to create a new case.

2. Data View: Contact/Event/Note/Email/SNS/Web Capture/Timeline Feed/Search View

2.1 Extraction Summary Dashboard

Once you start the extraction, a Summary View will appear and display the progress of ongoing extractions and some other miscellaneous information.

  • Timeline Chart: Displays the amount of data that has been extracted so far, relative to the dates associated with the extracted files (created / modified / uploaded time).
  • Tag Statistics: MD-CLOUD automatically categorizes extracted files using tags that are generated through file metadata. The statistics of the tags are displayed here.
  • List of Sites: Summarizes the progress of extraction from each data source. An extraction can be completely stopped by clicking on the stop icon.

2.2 Contact View

Displays contact information such as contact name, nickname, contact numbers, email address, address, profile, birthdays, etc.

2.3 Event View

Displays event data such as birthdays, shopping, meetings, driving, celebrations, conferences, seminars, and other events.

2.4 Note View

Displays notes collected from Cloud services such as iCloud Notes, Evernote, etc.

2.5 Email View

Email View allows users to group and sort based on date, subject, ‘from’ field, credentials, etc. Email items can be searched using the inline search box.

2.6 SNS View

Posts, multimedia, files and other information extracted from social network services such as Twitter, Facebook, etc. are displayed here.

2.7 Web Capture View

Contents that have been extracted through data crawling on the provided links and their sublinks are displayed in the Web (Web Capture) View. Multimedia, posts and other public contents can be extracted from sites like Facebook, Instagram and LinkedIn, or from any other webpage. The view displays the following information:

  • Link information: A list of extracted main links and their sub-links are displayed here.
  • Content View: Displays the content of the selected link.
  • Preview: Displays the overall look of the webpage.

2.8 Timeline Feed View

Displays the data from every category and arranges them by date (default), subject, content, type or credential.

2.9 Search View

Search keys entered anywhere in the application are maintained in the Search View. Double-clicking on a search key shows a list of the search results.

3. Generate Report: Case Info/Options/Layout

After the data extraction, the user can generate a PDF report of the case, which will display all the information of the extracted files and thumbnails of multimedia data. Below we have attached a screenshot of an extraction report for Google Home.

Demand for MD-CLOUD will gradually increase, as it has great practical value and importance as a data acquisition tool that can investigate mobile data backups and new data stored only in cloud storage. Our effort to add various data extraction sources and to advance MD-CLOUD will continue.

If you are interested in cloud forensics and want to learn more about MD-CLOUD, please check the product specification at the below link, and reach our team via forensics_sales@hancom.com.

HancomWITH Product Brochure – MD-CLOUD

How To Extract Cloud Data Using Oxygen Forensic Detective’s Cloud Extractor


Welcome to Oxygen Forensic Detective’s knowledge nuggets. In this video, I will show you how simple it is to extract cloud data using Detective’s Cloud Extractor. If you weren’t already aware, Oxygen Forensic Detective has a lot more to it than just extracting and parsing cell phones. Our Cloud Extractor is included, meaning if you own a license for Detective, you have Cloud Extractor. 

There are two ways to enter the Cloud Extractor. One is after you extract a device: when you view the accounts and passwords section at the top of the screen, you will find the Cloud Extractor. If you access it through here, all accounts with usernames, passwords, and tokens will automatically populate into the Extractor. The other location of your Cloud Extractor is on your home screen, under ‘extract’.

Let’s say that you have a witness or complainant that walks into your office and gives you consent to offer up their account information and data to help your case, but they don’t want to give up their cell phone. Do you take pictures or screenshots of their application information inside of the device and ask them to send it to you, or ask them to download their information directly and send it to you? The easy answer here is use the Cloud Extractor. If they’re giving you their account information and their password, we can enter that information into the extractor.

Here we can see we have several ways to retrieve information. In the previous scenario where you already have the account credentials, you could simply start a new extraction, enter the credentials, and begin the download process, which we’ll do in a second. 

The next option is to import credentials package. This package is generated by Detective or Key Scout and imported here. 

Your third option down is to use an iCloud token from a Windows PC, and here is where you can download Key Scout, which is an on-the-go tool that captures tokens and passwords from a computer and creates the package mentioned above. This is also included with your Detective license, and here you can [extract] WhatsApp backup files from an SD card or an Android device. Now let’s start our new extraction.

Let’s give our case a name and begin. Now let’s add our credentials to each application we need to extract from a cloud. Let’s say we have permission to extract Facebook, Twitter, and all accounts associated with Google. Let’s enter our Facebook credentials, our Twitter – add credentials – and all Google accounts, so we’ll go to the Google services and select all.

Now we’re not sure which services are currently being used, but we have permission to gain access to any of them. So here I’m going to try to access all of them and apply. Here we can see what services we’re looking at, the credentials, and the successes or failures.

Let’s try a new password that they gave us that it could be. There we go. 

All right, click your next button. Here we can see what categories have been exported and we can put a date range on it if we need to. 

Now the downloading process begins. 

Now that the downloads have completed, let’s click next. From this point, we can actually open this information in Detective and view it parsed, and here we are inside of our cloud backup. All of our information is here, and you can see all applications and where any data came from. For more information on Oxygen Forensic Detective, and for training opportunities, please contact us.

How To Decrypt WhatsApp Messages With Oxygen Forensic Detective


Welcome to Oxygen Forensic Detective’s Knowledge Nuggets. In this video we’re going to discuss decrypting WhatsApp messaging.

Let’s go over a few very important points that you need to consider before analyzing WhatsApp. 

Number one: always place the device in airplane mode. This is important for many reasons, but the reason specific to WhatsApp is [that] during the extraction of WhatsApp, iCloud backup or Google Drive backup or the WhatsApp cloud, entering the phone verification code will disable the previous WhatsApp installation. The application on the device will then lose its verified status.

Number two: pay close attention to the last date WhatsApp was used. If the extraction is attempted more than 45 days after the last time [the] WhatsApp account was online, all of the account data will automatically be wiped from the servers and the account will be removed from all group chats it participated in. (A quick date check for this window is sketched after this list.)

Number three: disable the two-step verification if you can. 

Number four: decryption and the sender. WhatsApp cloud relies on the original sender's devices to decrypt messages with end-to-end encryption. Thus, some messages may take hours or days to be decrypted, so you may want to do an additional extraction a few days after the first one.

Number five: always check for backups. If you receive a device with WhatsApp but there's no data, keep looking: your suspect may have deleted what was visible to you, but there may be backups. So check for Google backups, iCloud backups and the WhatsApp cloud backup itself.
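If it helps to make point two concrete, here is a minimal Python sketch that checks how many days have passed since the account was last online. The timestamp and the 45-day cutoff used here are illustrative assumptions, not values Detective reports.

from datetime import datetime, timezone

# Hypothetical last-online timestamp for the WhatsApp account (illustrative value only).
last_online = datetime(2020, 3, 1, tzinfo=timezone.utc)

# The 45-day retention window described in point two above; treat it as an assumption to verify.
WINDOW_DAYS = 45

elapsed_days = (datetime.now(timezone.utc) - last_online).days
if elapsed_days > WINDOW_DAYS:
    print(f"{elapsed_days} days since last activity: server-side data may already be wiped.")
else:
    print(f"{elapsed_days} days since last activity: still within the {WINDOW_DAYS}-day window.")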

The first way to extract data from WhatsApp is to do so by extracting the device with Detective. If the device with WhatsApp can be unlocked, check the status of the two-step verification, look in the settings and disable it if it’s enabled. 

If the device with WhatsApp can be unlocked, check if the messages are intact. If data was not wiped by the device owner, make a new cloud backup of the account manually. Switch the device to airplane mode, and make a record of the current date and time. 

If the device acquisition is possible, extract WhatsApp data directly from the device. 

Let’s say I’ve gone ahead and backed up my WhatsApp messages. My next step would be to extract the device. So let’s look and see what the results look like. 

We have WhatsApp Business, Messenger, and we already have some backups in this device. Here we can see account information, all contacts, our chats, any group chats, and additional information. 

Now let's use our Cloud Extractor to extract additional data. Navigate to your backups and locate msgstore.db. This is the database containing your messages, and it's encrypted, so you'll need your token.

We can use our phone number or the cloud token. If you don’t have the token, use the phone number, but remember when you do this, you will be logged out of the WhatsApp account on the device, so make sure you have that extraction first. Keep in mind you’re going to need to turn airplane mode off so you can get the SMS from WhatsApp. 

Since we have selected the authentication type to be with SMS, it will send us a code or it can make a phone call. Here we can see, since I have made this request too many times today, I’ll have to go back and request the phone call. And I’ve just received notification that I’ve been logged out of my WhatsApp account on the device itself. 

Now we can open the WhatsApp backup in Oxygen Forensic Detective and see how it was parsed. Now that this database has been decrypted and parsed, you can see the information that's available.
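Because the decrypted msgstore.db is an ordinary SQLite database, you can also spot-check it outside of Detective. The following is a minimal Python sketch that assumes the legacy "messages" table name; WhatsApp schemas differ between versions, so confirm the table names against your own data first.

import sqlite3

# Hypothetical path to an already-decrypted WhatsApp message database.
DB_PATH = "msgstore.decrypted.db"

conn = sqlite3.connect(DB_PATH)
conn.row_factory = sqlite3.Row

# List the tables actually present; the "messages" table queried below is an assumption to confirm first.
tables = [row["name"] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(tables)

if "messages" in tables:
    for row in conn.execute(
            "SELECT key_remote_jid, data, timestamp FROM messages LIMIT 10"):
        print(dict(row))

conn.close()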

Now that we’ve decrypted some backup files, and you see just how easy it is to grab a token or authenticate through SMS, let’s walk through a new extraction. Here we’re going to use the cloud extractor to pull down all backups possible. 

Let’s go into the device first. Find all the credentials that we can, parsed from the device, and we’re going to input them here. Click on ‘New extraction’ and choose the WhatsApp application.

We’re going to select all and start with our username and password.

Next we'll upload a key file. This key file will authenticate your WhatsApp Google backup. You can find this file in the following file path on an Android device. If you've extracted the device information and navigated in the file section to find this key, save it out to your desktop so it's easy to find, and import it here. (An offline decryption sketch using this key file follows the steps below.)

Put the phone number in. Remember that once it authenticates here with the phone number, you will no longer be able to access the account within the device. Meaning, if you authenticate using the phone number first, you will be kicked out of the device before you can authenticate with the QR code, and vice versa. And if you have a token, place it here.
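As an aside, the key file uploaded earlier can also be used to decrypt a crypt12 backup offline. The Python sketch below follows the layout described in public write-ups of the crypt12 format and requires the pycryptodome package; the byte offsets are assumptions that vary between WhatsApp versions, so verify them against your own files before relying on the output. This is not how the Cloud Extractor itself works, only a rough independent check.

import zlib
from Crypto.Cipher import AES  # pip install pycryptodome

# Hypothetical input paths: the key file saved from the device and a Google Drive/SD card backup.
KEY_FILE = "key"
CRYPT12_FILE = "msgstore.db.crypt12"
OUTPUT_DB = "msgstore.decrypted.db"

with open(KEY_FILE, "rb") as f:
    key_blob = f.read()
with open(CRYPT12_FILE, "rb") as f:
    backup = f.read()

# Offsets follow publicly documented crypt12 layouts and are assumptions, not a specification:
# the AES-256 key is the last 32 bytes of the 158-byte key file, the 16-byte GCM nonce sits at
# bytes 51-67 of the backup, and the ciphertext runs from byte 67 up to a 20-byte trailer.
aes_key = key_blob[126:158]
nonce = backup[51:67]
ciphertext = backup[67:-20]

cipher = AES.new(aes_key, AES.MODE_GCM, nonce=nonce)
plaintext = zlib.decompress(cipher.decrypt(ciphertext))

with open(OUTPUT_DB, "wb") as f:
    f.write(plaintext)
print(f"Wrote {len(plaintext)} bytes to {OUTPUT_DB}")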

Once this carries over to Detective, you can now see your entire WhatsApp account parsed. For more information on Oxygen Forensic Detective and for training opportunities, please contact us.


How To Collect Data Using MacQuisition Live


As more employees are required to work from home, we've heard from our customers that they need the ability to remotely collect data from Mac systems without having to send MacQuisition hardware to someone's home. In order to help our customers in this unique time, BlackBag is making a new software-only option available to MacQuisition customers for a limited time.

Below we’ve answered some common questions about this new functionality. 

In addition, below is an easy-to-use how-to guide for the person running the application and completing the collection.

Using MacQuisition Live

So, you’ve downloaded MacQuisition Live, let’s take a look at some ways you can use it.

MacQuisition Live provides a mechanism to collect data from remote users in one of the following ways:

  • Provide the MacQuisition Live dmg and license information to the person who needs to complete the collection and they can run it live on any Mac that needs files extracted.
  • The examiner can drive the collection by connecting to the Mac remotely to run the MacQuisition software. There are several built-in options on the Mac to allow remote access, for instance Mac Remote Access or Mac Screen Share, as well as commercial remote access tools. For more information on remote access of Mac systems, there are helpful suggestions in this article.

Once the data is collected on the macOS system, the collection can be transferred via email or a cloud storage solution such as Dropbox. We recommend storing collected data in the logical evidence file format, which preserves key file metadata.
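Before a collection travels over email or cloud storage, it is worth recording hashes of the output so whoever receives it can confirm nothing changed in transit. The short Python sketch below does this for a folder of .L01 segments; the folder name is illustrative.

import hashlib
from pathlib import Path

# Hypothetical folder holding the exported logical evidence file segments.
COLLECTION_DIR = Path("Collection_Export")

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so large evidence segments need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

for segment in sorted(COLLECTION_DIR.glob("*.L01*")):
    print(f"{sha256_of(segment)}  {segment.name}")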

Things To Keep In Mind

If you are having the user of the device collect data, specific instructions must be provided. The scope of the collection should be clearly defined in the instructions sent to the user. Our triage mode allows you to browse file content or search for files based on location, filename, extension, file size, dates, and keywords. MacQuisition also has built-in collection options available in the Collection tab.

In addition to MacQuisition Live, a license file is required.  The license file will be saved to any system MacQuisition Live is run on.  Both of these must be provided for the user to run the application on their system.

A plan must be in place to transfer the collected data from the device the data was collected on to the people analyzing the collected data.

Possible Uses

MacQuisition Live provides a mechanism for eDiscovery data collections, collections related to HR requests, or even to find files that correlate to indicators of compromise when a threat is detected.  Let’s walk through how to run MacQuisition Live and then one collection scenario. This scenario can be used as a template for creating a set of instructions for data collection.

How To Run MacQuisition Live

MacQuisition Live is stored in MacQuisition_2020R1.dmg. Open the dmg on the macOS system data will be collected from. A Finder window appears showing the MacQuisition Live application.

Double-click on MacQuisition.  The following dialog box appears:

The User Name box contains the user account's user name. Type your login password in the Password box, then click Install Helper.

The following dialog box will appear:

Click Enter License Key. In the window that appears, either manually enter the license information or, if a license file has been provided, click Load from File.

Note:  You cannot copy and paste the license file information.  It must either be manually typed in or loaded from a license file.

Once the license information is entered or loaded from a file, click Enter License.  

The MacQuisition EULA window will appear. Click Agree.

The following warning dialog box may appear:

Click OK.

MacQuisition Live is now running on the system.

Collecting Data

This section provides an instruction sample for collecting data that can be sent to users performing the data collection.  These instructions should be customized for your collection needs before they are sent. Keep in mind the level of expertise of the collector when creating your own data collection instructions.  The instructions should be tested by someone with data collection experience before they are distributed to users who are less familiar with data collection processes. Also remember running MacQuisition Live will create changes on the system.   At the end of this example, possible variations that you can use to customize these instructions are provided.

Example 1 – Collecting Data Based on Keyword

In this example we are going to search for files related to the flamingo project and the octopus project.  Specifically, we are looking for documents used on these projects.  The target for the collection is the user’s Documents folder.  

Step 1 – In MacQuisition, click on the Collection tab.  Right-click on the left side of the collection tab and choose Deselect All.

Step 2 – Click on the Search tab.

Step 3 – Use the Location drop-down menu and select your Documents directory.

Step 4 – In the Content section type the keyword “flamingo” and check the Search Documents check box.  Click Search.

Step 5 – The results returned are displayed in the middle window.  Highlight all of the files in the middle window, right-click and choose Add selected Items to Collection.

Step 6 – Repeat steps 4 and 5 using the second keyword “octopus.”

Step 7 – Click on the Collection tab.  The files added to the collection are displayed in the ADDITIONAL FILES section.  The total size of the collection is also listed.

Step 8 – Choose a location for the data collection by clicking Set….  In the Select Destination Volume Window, choose the data volume of the device and click Open.  In this example, the data volume is named MacSSD – Data

A Finder window appears. Navigate to the Desktop folder of your user profile: MacSSD/Users/<username>/Desktop. Click Open. The path to your Desktop appears in Destination.

Step 9 – From the drop-down menus select .L01 for Format, and 2GB for Segment Size.  Uncheck SHA1.  Click Start.

The Activity window appears showing the status of the collection.  Once the collection completes, the Finished acquiring data message appears with the collection storage path.

Step 10 – Close MacQuisition.  In Finder navigate to the collection folder.  Email the entire collection folder to thisperson@somecompany.com.  
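If you want a rough preview of what a keyword pass like this will return before sending the instructions out, the conceptual Python sketch below walks a Documents folder and flags plain-text files containing either keyword. It is only an illustration of the idea, not how MacQuisition implements its search, and the path and extensions are assumptions.

from pathlib import Path

# Illustrative scope only: the user's Documents folder, the two project keywords, and plain-text types.
DOCUMENTS = Path.home() / "Documents"
KEYWORDS = ("flamingo", "octopus")
TEXT_EXTENSIONS = {".txt", ".csv", ".md"}

matches = []
for path in DOCUMENTS.rglob("*"):
    if not path.is_file() or path.suffix.lower() not in TEXT_EXTENSIONS:
        continue
    try:
        text = path.read_text(errors="ignore").lower()
    except OSError:
        continue  # skip unreadable files rather than aborting the pass
    if any(keyword in text for keyword in KEYWORDS):
        matches.append(path)

for match in matches:
    print(match)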

Collection Variations

MacQuisition Live has a myriad of other features that can be used for data collection, so depending on what you are trying to collect, the above instructions can be altered to fit your collection requirements.

In the Search tab, data can be searched for by Location, Name, Extension, File Size, Date(s), and Contents (keyword). You can search for multiple file extensions at the same time by separating the extensions with a colon. For example, pdf:png:doc.
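As a rough stand-in for that extension filter, the Python sketch below expands a colon-separated list such as pdf:png:doc and lists matching files; the search location is illustrative only.

from pathlib import Path

# Extension filter written in the same colon-separated style as the Search tab.
EXTENSION_FILTER = "pdf:png:doc"
SEARCH_ROOT = Path.home() / "Documents"  # illustrative search location

wanted = {"." + ext.strip().lower() for ext in EXTENSION_FILTER.split(":") if ext.strip()}

for path in SEARCH_ROOT.rglob("*"):
    if path.is_file() and path.suffix.lower() in wanted:
        print(path)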

The Browser tab can be used to navigate to a specific file path to add items to a collection.

The Collection tab has pre-defined sets of information that you can choose for collection. 

Refer to the MacQuisition User Guide to read more about Live data collection options. 

One of the most important steps to refine is Step 10. Keep in mind the amount of data that may be in the collection. Sending large collections by email may not be feasible. Transferring collections via a cloud storage solution such as Dropbox may be a more appropriate option.

If you have any other questions or issues, search the BlackBag support portal at https://support.blackbagtech.com or reach out to tech support via email at support@blackbagtech.com.

Industry Roundup: Online Digital Forensics Training


by Christa Miller, Forensic Focus 

Online digital forensics training has been around for a number of years, offered as a convenient alternative to in-person training for examiners who couldn’t travel or were otherwise resource-constrained. In an unforeseen twist, though, live and on-demand remote training has become critically necessary to professional development during the COVID-19 pandemic.

Each model has its strengths. Live, instructor-led online classes are interactive, offering a virtualized classroom experience as students communicate with both instructors and fellow students. However, on-demand training offers the benefit of going at your own pace and according to your own schedule.

When you’re evaluating online training, a few other things to keep in mind include time zones, language requirements, whether the class can be bundled or applied to a vendor’s annual pass, and whether the class counts toward certification training or exams. For interactive classes, also consider whether you have or will need a second monitor to interact with both course material and your instructor.

In this article, we round up many of these opportunities.

AccessData

AccessData Digital Investigations Training includes live online classes designed to guide examiners through the use of AccessData’s Forensic Toolkit® (FTK), AD Enterprise and AD Lab technologies in identifying, responding, investigating, prosecuting and adjudicating cases.

The company is currently in the process of changing live in-person classes to live online. For the United Kingdom, all in-person classes through the end of May have been switched to live online. Learn more about the available course options and register at AccessData's website.

Amped

Amped recently announced via its blog that it is standing up live online training classes with its experienced trainers. The first online classes have been added to Amped’s training schedule, and more are slated to be added. Visit Amped’s website for more information and to register.

Autopsy 

Until May 15, 2020, free online training in the use of open-source Autopsy forensics software is available to anyone. Previously – and again after May 15 – the training was open only to local, state, and federal US law enforcement personnel. A completion certificate can be used towards CPE credits if desired. Register with any email address to receive the coupon code to register for the course. Learn more here.

BlackBag Technologies

BlackBag’s new live, instructor-led virtual training offers its three most requested courses, Basic Forensic Investigations, Windows Forensic Investigations, and Apple Forensic Investigations, throughout April and May.

In addition, new live virtual forensic workshops highlight product features and digital investigation techniques for Mac and Windows, and Mobilyze Tool Training is currently available on-demand as a free, self-paced online course.

Over the coming weeks, BlackBag plans to release more dates and time zones for live, virtual courses, as well as on-demand, self-paced training courses. Visit BlackBag's training website to learn more and register.

Belkasoft

Belkasoft’s four training courses — Belkasoft Essentials, Belkasoft Advanced, Belkasoft Certification Training, and Incident Response Examination — are all available online. Between them, these courses offer skills in both digital forensics and the use of Belkasoft Evidence Center, covering the acquisition and analysis of data artifacts from hard drives, smartphones and cloud, and other data sources such as memory and virtual machines.

Each module is accompanied by a set of practical exercises, with the opportunity for Q&A during the training session. Learn more at belkasoft.com/training

Cellebrite

The Cellebrite Academy offers one of the most extensive sets of online courses in both live online and self-paced formats. Delivered in 10 different languages, the courses are bundled to make it possible for students to work toward a certification. The Cellebrite Reader online class is free.

Cellebrite's live online classes are limited to 16 students per class, and groups of five or more are eligible to receive 15% off for all registrants. Learn more and register on Cellebrite's website.

International Society of Forensic Computer Examiners (ISFCE)

The ISFCE’s Computer Forensic Training Center Online offers a self-paced, eight-module Guided Self Study CCE Bootcamp. Students receive course material handouts, along with pre-prepared USB thumb drives, mobile phones and a hard disk drive for examination.

With instructor support provided via email, the Self-Study CCE Bootcamp covers tool-agnostic core forensic practices with an eye toward evidentiary admissibility at trial and qualifying as an expert witness. Course eligibility is open to both civilian and law enforcement digital forensics professionals, along with those who work in IT, accounting, legal and other fields.

Magnet Forensics

Magnet Forensics has offered virtual instructor-led courses for years, but recently increased its virtual machines’ capacity and transitioned its classroom instructor-led courses online in both instructor-led and self-paced formats.

The training encompasses both tool- and platform (mobile and cloud)-oriented courses, as well as a dedicated incident response course. 

MSAB

MSAB's online, on-demand training course options include its XRY Certification and Recertification courses in mobile forensics using its XRY software. Other online courses include the XRY Express Logical and Physical certifications, as well as XAMN Spotlight, which is offered with both on-demand and instructor-led options.

For customers who had registered for classroom training, MSAB has waived its 14-day notification requirement until the end of May as long as participants reschedule the training for a later date before October 31, 2020 (subject to pandemic status).

National White Collar Crime Center (NW3C)

The NW3C’s training encompasses a range of courses for digital forensics examiners, cybercrime investigators, and prosecutors, with the result that professionals can cross-train to learn about multiple issues they may end up supporting.

Having suspended its classroom training through the end of April 2020, the NW3C is pivoting to deliver previously canceled classes through its virtual platform. It’s also stepping up its robust webinar program, offering seats to up to 3,000 participants two to three times per week. (Want to offer webinar content? Email topics@nw3c.org!)

Finally, the NW3C began offering Capture the Flag competitions in March. Daily challenges offer examiners of all experience levels a way to keep their skills up using open source tools. Thanks to sponsors, prizes will be offered for CTF winners from April through mid-May. Get a sense for what the challenges look like on NW3C's YouTube page, and sign up at nw3.ctfd.io.

Nuix

The Nuix Workstation Forensic Practitioner Core, Nuix Workstation Forensic Practitioner Foundations, and Nuix Workstation Forensic Practitioner Windows certification courses, as well as the Nuix Investigations End User and Administrator courses, are all available live online, and along with their respective exams count toward Nuix Forensic Practitioner Master Certification. Classes may be purchased as a bundle at a discounted price. 

In addition, Nuix is expanding its webinar schedule throughout 2020. Look for a brand-new Nuix Investigate Power User webinar series coming this year, along with thought leadership panels and demos. Visit nuix.com/nuix-at-home for the latest content, or nuix.com/webinars for a full listing of webinars.

OpenText

OpenText Adoption & Learning Services offers self-paced online learning programs through its eLearning portal. It offers an annual training passport at a single flat, discounted rate for all available EnCase Training OnDemand courses — including OpenText EnCase Forensic, OpenText EnCase Mobile Investigator, OpenText Endpoint Investigator, and others — over a one- or two-year period. Certifications are available as well.

OSForensics

Three online options are available from PassMark’s OSForensics:

  • Self-directed online learning combines slide decks, videos, and lab exercises that enable OSForensics users to build proficiency with the software.
  • A separate online certification exam is available for those who wish to become an OSForensics Certified Examiner (OSFCE).
  • In addition, first responders and new digital forensic practitioners can take the OSForensics Certified Triage Certification test completely free of charge.

Oxygen Forensics

With free remote training on the Oxygen Forensic® Detective interface, Oxygen is offering a complimentary introduction to its remote training platform. Over a two-hour session led by a live Oxygen instructor, attendees log into Oxygen’s online education environment to learn more about the software and how to implement it. Sessions are restricted to 10 attendees.

Oxygen additionally offers three- and five-day remote advanced training on its platform via its Forensic Boot Camp and Accelerated Training Program for a fee. 

Paraben

Paraben’s operator-level courses for mobile and computer each come with an online training, lab, and certification examination. Paraben currently offers the following courses online as of April 2020:

  • A new free online training course focuses on the functions in the free version of the E3 Forensic Platform, which is provided to all attendees.
  • Going further, the E3 Fast Track gets into the details of the Paraben E3 Forensic Platform and how it can be used with a large variety of different digital data types including computer data, email data, internet data, smartphones, and cloud information.
  • The tool-agnostic Forensic Fundamentals course teaches novice examiners about getting started in digital forensics: chain of custody, collection, acquisition, analysis, and reporting on data.

Passware

Passware Certified Examiner (PCE) Training is an online, self-paced course that provides forensic examiners with a start-to-finish education on the use of Passware Kit Forensic to detect, analyze, and decrypt encrypted electronic evidence.

More than 15 videos of 15-30 minutes each show attendees how to detect encrypted evidence, recover passwords for all common file types, analyze memory images, recover passwords for mobile backups, and decrypt hard drives. An optional exam offers attendees the ability to test for a Passware Certified Examiner (PCE) certification.

Learn more at passware.com/training.

SANS Institute

SANS offers a number of free resources for digital forensics examiners. In particular, new weekly Capture-the-Flag events and NetWars challenges are available in a virtual online platform now through May 31st. For training, free one-hour course previews of more than 45 SANS courses, many involving digital forensics, are available through its OnDemand platform.  

Spyder Forensics

Training is Spyder Forensics’ focus, and its Remote Synchronous Training (RST) is fully interactive, designed to simulate a live classroom environment with structured – and customizable – courses.

In addition, Spyder Forensics’ Learning Management System (LMS) provides on-demand interactive delivery of nearly all training material. Students are assigned the appropriate course and module, or group of modules based on a customized learning path by role, skill level, or content. 

Also recently introduced is Forensic Fridays, a series of complimentary one-hour knowledge exchange sessions on a wide range of topics. To learn more, visit www.spyderforensics.com.

Teel Technologies

Another training-oriented provider, Teel Technologies has recently introduced a weekly educational live broadcast straight from its instructors. These short, 15-20 minute lessons on topics like Faraday bags, SQLite Explorer, and flasher boxes offer a brief presentation, followed by a Q+A and discussion.

As well, online training is available. The first class to be offered will be SQLite Forensics, a five-day course that teaches law enforcement mobile forensics examiners how to perform low-level analysis and recovery on SQLite databases. 

X-Ways

X-Ways’ online live training includes the four-day X-Ways Forensics and the two-day X-Ways Forensics II, though additional training may be offered with sufficient demand and filled seats. Hands-on exercises with sample image files help class attendees to implement their new skills immediately, and temporary licenses are available to take the class.

While only the four-day class is required to attempt X-PERT certification, the two courses together are recommended. 
