@Pytt4m There is Timeline Explorer, which is a nice spreadsheet viewer. Plaso can also export directly into Elasticsearch, and then you can build Kibana dashboards.
#psort.py -o elastic --server 127.0.0.1 --port 9200 --index_name mywebserver web.plaso
This is probably the best way to visualize this data, since you can filter and graph on any field, and there are lots of options.
I typically go old-school command line and very surgical. If I'm full-timelining something, then I already have an idea of when. Then I'll do something like this...
#grep <date time> timeline.csv |cut -d ',' -f1,3,5,7,8,12 |grep -v '.a..'
In the above command:
grep filters for the date and time you want from the file. Use the format the file uses, e.g. "2021-03-12 15:00"
cut -d says use comma as the delimiter
cut -f grabs fields 1,3,5,7,8,12 (I just made these up for the example)
grep -v drops file-access-time entries, which typically show as .a.. This depends on what I'm investigating, though, but for initial triage I remove them.
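To make the pipeline above concrete, here's a sketch against a couple of made-up timeline rows — the field positions, timestamps, and paths are purely illustrative, and I've escaped the dots in the .a.. pattern so they match literally:

```shell
# Two hypothetical timeline rows (MACB flags in field 3 here, just for the example)
printf '%s\n' \
  '2021-03-12 15:01:22,EST5EDT,m...,F,r/rrwxrwxrwx,0,0,12345,/var/www/shell.php' \
  '2021-03-12 15:02:10,EST5EDT,.a..,F,r/rrwxrwxrwx,0,0,12345,/etc/passwd' \
  > timeline.csv

# Filter the time window, keep a few fields, drop pure access-time entries
grep '2021-03-12 15:0' timeline.csv | cut -d ',' -f1,3,9 | grep -v '\.a\.\.'
# → 2021-03-12 15:01:22,m...,/var/www/shell.php
```

The access-time-only row drops out, and you're left with just the fields you care about.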
I do this because of speed. Grepping files is usually very fast, and I don't have to set up anything additional to process them. If the file is huge, I'll break out the couple of days I'm interested in to make it faster.
#grep '2023-01-23' timeline.csv >2023-01-23-timeline.csv
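If you need several days broken out, a small loop over the same idea works — the dates and rows here are placeholders:

```shell
# Sample timeline rows (illustrative)
printf '%s\n' \
  '2023-01-23 10:00:00,EST5EDT,m...,/tmp/a' \
  '2023-01-24 11:00:00,EST5EDT,m...,/tmp/b' > timeline.csv

# One slice per day of interest keeps later greps fast
for day in 2023-01-23 2023-01-24; do
  grep "$day" timeline.csv > "$day-timeline.csv"
done
```

Each slice is a normal CSV, so the same cut/grep pipeline works on it unchanged.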
If I need a graphical timeline for a report, I'll use Aurora and create it manually. Hope this helps!
#DFIR #Linux #incidentresponse