How to back up browser state after Watir automation

Summary of tools:
 watir-webdriver 1.8.17
 Mac OS X 10.7.3
 Chrome 18.0.1025.151

I'm currently using Watir WebDriver to automate Chrome sessions across a number of websites. I need to back up the state of the web browser (cookies, cache, etc.) at certain points during the session. Originally, I figured I could do this with Ruby's file IO library by copying ~/Library/Application Support/Google/Chrome/Default at the necessary points. However, Chrome sessions created with Watir WebDriver do not appear to store the needed information in this default location. How can I locate this data in order to back it up? Is it stored somewhere else? Is there something other than Watir that would make this easier?


I finally have a solution!

It appears that watir-webdriver stores the browser state/user data in a randomly generated temporary path by default, with a random identifier in the directory name, so the location changes from session to session.


Instead of relying on this randomized default path, you can specify a precise location for the user data by passing a Chrome switch when creating the browser: Watir::Browser.new :chrome, :switches => %w[--user-data-dir=/path/to/user/data]

Then the cache, cookies, etc. can be backed up, deleted, etc. using Ruby's standard library. Hopefully this helps someone else.
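To make the backup step concrete, here is a minimal sketch using only Ruby's standard library. It assumes you launched Chrome with a known --user-data-dir; the helper name and paths are my own, not part of watir-webdriver.

```ruby
require "fileutils"

# Copy the Chrome user-data directory (cookies, cache, etc.) into a
# timestamped backup folder. "user_data_dir" is whatever path you
# passed to --user-data-dir; "backup_root" is any writable directory.
def backup_profile(user_data_dir, backup_root)
  stamp = Time.now.strftime("%Y%m%d-%H%M%S")
  dest  = File.join(backup_root, "chrome-profile-#{stamp}")
  FileUtils.cp_r(user_data_dir, dest)
  dest
end
```

Call backup_profile at whatever points in the session you need a snapshot; restoring is just cp_r in the opposite direction while the browser is closed.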

Edit: If you are unable to find where watir-webdriver is storing your user data by default, find Chrome's process ID by running watir-webdriver and then top. Once you have the PID, run lsof -p <pid> in a terminal to find the path to the user data.

Another thing I like to do is serialize (save) the Watir::Browser object to a file using YAML, like so:

require "yaml"
File.open("browserObj.yaml", "w") { |f| f.write YAML::dump(@browser) }

This browserObj.yaml file will then contain all sorts of internal details in easily readable/parseable text, including the PID of the browser, the path to the temporary profile, etc. For example:

profile_dir: /tmp/webdriver-rb-profilecopy20121201-1981-9o9t9a
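Since a dumped Watir::Browser object may not round-trip cleanly through YAML.load, a simple way to use the dump is to scan its text for the field you need. A sketch, assuming the browserObj.yaml filename from above (the helper name is hypothetical):

```ruby
# Pull the temp-profile path back out of the YAML dump by scanning
# for the "profile_dir" line rather than deserializing the object.
def profile_dir_from_dump(path)
  line = File.readlines(path).find { |l| l.include?("profile_dir") }
  line ? line.split(":", 2).last.strip : nil
end
```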
