Mochitest

Mochitest is an automated regression testing framework used by Mozilla, built on top of the MochiKit JavaScript libraries. Tests report success or failure to the test harness using JavaScript function calls.

Mochitest's use of JavaScript function calls to communicate test success or failure can be unsuitable for certain types of test. Only things that can be tested using JavaScript (with chrome privileges!) can be tested with this framework. Given some creativity, that's actually much more than you might first think, but it's not possible to write Mochitest tests to directly test a non-scripted C++ component, for example.  (Use a compiled-code test to do that.)

Running tests

The Mozilla build machines run Mochitest as part of the build and test process, so we get to know pretty quickly if someone commits a change to the source code that breaks something. However, you should still run Mochitest yourself before you commit any new code. You don't want to be the one who wastes everyone's time by breaking the tree if you can help it. :-)

Running the whole test suite

To run Mochitest, build Mozilla with your changes; then run

./mach mochitest

Note: Keep focus on the browser window while the tests are being run, as some may fail otherwise (like the one for bug 330705 for example). Linux users can avoid this inconvenience by using a dummy X server (see Diverting X output below).

This runs every flavor and subsuite (see Running flavors and subsuites below) of Mochitest, which will take a very long time. It is often unnecessary to run the entire suite locally, since the tests require focus and take a long time to complete. A good compromise is to locally run the tests you suspect are affected by your changes, such as those in the same directory as the code you touched. If those pass, you can then run the full suite of tests if you think it is necessary.

Running select tests

To run a single test (perhaps a new test you just added) or a subset of the entire Mochitest suite, pass a path parameter to the mach command. For example, to run only the test test_CrossSiteXHR.html in the Mozilla source tree, you would run this command:

./mach mochitest dom/security/test/cors/test_CrossSiteXHR.html

To run all the tests in dom/svg/, this command would work:

./mach mochitest dom/svg/

Note: The path specified is the path to the test or directory within the Mozilla source tree. If the path is a directory, the tests in that directory and all of its subdirectories are loaded.

Running flavors and subsuites

Flavors are variations of the default configuration used to run Mochitest. For example, a flavor might have a slightly different set of prefs, a custom extension installed, or it might even run in a completely different scope. The Mochitest flavors are:

  • plain - The most basic and common Mochitest. They run in content scope, but can access certain whitelisted privileged APIs with SpecialPowers.
  • browser - These often test the browser UI itself and run in chrome scope. More information can be found here.
  • chrome - These run in chrome scope and are typically used for testing privileged JavaScript APIs. More information can be found here.
  • a11y - These test the accessibility interfaces. They can be found under the top 'accessible' directory.
  • jetpack-addon - Tests Jetpack add-ons (more info needed).
  • jetpack-package - Tests Jetpack packages (more info needed).

A subsuite is similar to a flavor, except that its configuration is identical to that of its parent flavor; it is just logically separated from the "default" subsuite for display purposes. For example, devtools is a subsuite of the browser flavor. There is no difference in how these two jobs are run. The subsuite exists so that the devtools team can easily see and run their tests.

Note: There are also tags, which are similar to subsuites. Although both are used to logically group related sets of tests, they behave differently: applying a subsuite to a test removes that test from the default set, whereas a tag does not.

By default, mach finds and runs every test in the given subdirectory (or topsrcdir if none is specified), no matter which flavor or subsuite it belongs to. But sometimes you might only want to run a specific flavor or subsuite. This can be accomplished using the --flavor (or -f) and --subsuite options, respectively. For example:

./mach mochitest -f plain                        # runs all plain tests
./mach mochitest -f browser --subsuite devtools  # runs all browser tests in the devtools subsuite
./mach mochitest -f chrome dom/indexedDB         # runs all chrome tests in the dom/indexedDB subdirectory

In many cases, it won't be necessary to filter by flavor or subsuite as running specific directories will do it implicitly. For example running:

./mach mochitest browser/devtools

is roughly equivalent to running the devtools subsuite. There may be situations where you want to run tests that don't belong to any subsuite. To do this, use:

./mach mochitest --subsuite default

Note: Before June 1st, 2015, each flavor had its own specific mach command (e.g. mach mochitest-plain). These have been removed and replaced by --flavor.

Running specific chunks

The entire Mochitest suite takes a long time, so to get results in automation faster, they are split up into groups called "chunks". On desktop, mochitest is grouped into 5 different chunks. You can easily run mochitest-1 through mochitest-5 like so:

./mach test mochitest-1

On other platforms, Mochitest can be grouped into more or fewer chunks. To mimic this, you can pass in --total-chunks and --this-chunk. For example, if the platform has 9 chunks and you want to run the 3rd one:

./mach mochitest -f plain --total-chunks 9 --this-chunk 3

Running tests on comm-central (Thunderbird, SeaMonkey)

Currently (as of August 2013) you cannot use mach to run tests in comm-central applications. You need to run the "old" pymake commands inside the obj-dir (if you use one). See pymake for how to use pymake on Windows. For example, to run only the test test_CrossSiteXHR.html in the Mozilla source tree, you would run this command:

TEST_PATH=content/base/test/test_CrossSiteXHR.html make -C $(OBJDIR) mochitest-plain

Note: mochitest-plain is an example. There are multiple categories of mochitests. Depending on the test you want to run, use either mochitest-plain, mochitest-chrome, mochitest-browser-chrome, mochitest-a11y or mochitest-ipcplugins.

To run all the tests in https://dxr.mozilla.org/comm-central/source/testing/mochitest, use this command:

TEST_PATH=suite/common/ make -C $(OBJDIR) mochitest-chrome

Due to bug 907434, you should not specify any objdir in your .mozconfig. Use the default one instead to be able to run tests! If you don't like the default choice, use symbolic links (also works on Windows, see the bug for details) to redirect the objdir to another folder.

Running mochitests under Electrolysis (e10s) for Fennec

To run your Mochitest in a child tab process, run it in a tree that was built for Fennec, i.e. configured with

ac_add_options --enable-application=mobile

This works for Fennec builds on Linux, as well as Windows and Android builds. OS X may vary depending on the state of our e10s implementation there.

On Mac, it is necessary to patch automation.py to remove the unconditional "-bin" appended to the appname, and run mochitests with the following setting:

EXTRA_TEST_ARGS="--appname=dist/FennecDebug.app/Contents/MacOS/fennec"

Running mochitests without Electrolysis (e10s) for desktop builds

Mochitests run with Electrolysis by default, but you can disable it by passing the --disable-e10s option to mach's mochitest command (e.g. ./mach mochitest -f browser --disable-e10s ...). This causes the profile created for testing to have the preference browser.tabs.remote set to false before starting the test suite.

Running and debugging mochitests on Android

Instructions on how to run Mochitests in Firefox Mobile (Fennec) on Android are available on wiki.mozilla.org, as are instructions on how to use remote debugging to debug Mochitests on Firefox Mobile (Fennec) on Android.

Running failed tests

After running mochitests, you might have some errors. If you do, you can run make mochitest-plain-rerun-failures to run just the tests that failed:

make mochitest-plain
make mochitest-plain-rerun-failures

The first run records the failed tests in the failures.json file at the root of the mochitest directory ($(objdir)/_tests/testing/mochitest). When rerunning the failures, the harness uses the --run-only-tests flag to run only the tests listed in that file.

Note: This functionality was broken in mach; bug 1155231 tracks re-implementing it.

Debugging individual tests

If you need to debug an individual test, you could reload the page containing the test with the debugger attached.  If attaching a debugger before the problem shows up is hard (for example, if the browser crashes as the test is loading), you can specify a debugger when you run mochitest:

./mach mochitest --debugger=gdb ...

See also the --debugger-args and --debugger-interactive arguments.

You can also use the --jsdebugger argument to debug JavaScript.

Finding errors

Search for the string "TEST-UNEXPECTED-FAIL" to find unexpected failures. You can also search for "SimpleTest FINISHED" to see the final test summary. This is particularly useful when viewing full Tinderbox logs, since the Mochitest output isn't necessarily at the end of the combined log.

Logging results

The output from a test run can be sent to the console and/or a file (by default the results are only displayed in the browser). There are several levels of detail to choose from. The levels are DEBUG, INFO, WARNING, ERROR and CRITICAL, where DEBUG produces the highest detail (everything), and CRITICAL produces the least.

Mochitest uses structured logging. This means that you can use a set of command line arguments to configure the log output. To log to stdout using the mach formatter and log to a file in JSON format, you can use --log-mach=- --log-raw=mochitest.log. By default the file logging level for all your formatters is INFO but you can change this using --log-mach-level=LEVEL.

To turn on logging to the console use --console-level=LEVEL.

For example, to log test run output with the default (tbpl) formatter to the file ~/mochitest.log at DEBUG level detail you would use:

./mach mochitest --log-tbpl=~/mochitest.log --log-tbpl-level=DEBUG

Diverting X output

The tests must run in a focused window, which effectively prevents any other user activity on the engaged computer. Linux users can reclaim their boxes by telling the suite to use a hidden virtual desktop. If xvfb is or can be installed, the following command launches the tests without blocking the active session:

nice xvfb-run mach mochitest ...

Other possible configurations have also been discussed in bug 434365.

Running tests without mach

You can also run tests without a build environment or mach.  The easiest way to do this is to download the tests.zip file corresponding to your platform, and extract it. Assuming that you have both pip and virtualenv installed, the following commands should help you get started.

cd $tests_zip
virtualenv venv # create your virtualenv
source venv/bin/activate # activate your virtualenv
cd config
pip install -r mozbase_requirements.txt
cd ../mochitest
python runtests.py --help

Other runtests.py options

The runtests.py script recognizes several other options. To see the full list of options, run:

./mach mochitest --help

Writing tests

A Mochitest test is simply an HTML, XHTML or XUL file that contains some JavaScript to test for some condition(s).

You can use Mochitest maker to run most tests without having to build Mozilla.

Try to avoid Mochitest

Yes, really. For many things Mochitest is overkill. In general you should always try to use one of the lighter-weight testing frameworks. For example, if you only want to test a single XPCOM component, you should use xpcshell. On the other hand, there are some things that Mochitest cannot do, or isn't designed to do. For example, for visual output tests you should try to use the reftest framework. For more information on the different types of automated testing frameworks see Mozilla automated testing.

Best practices

Please follow the tips for Avoiding intermittent test failures when writing tests.

Test templates

You can avoid typing out boilerplate by using the gen_template.pl perl script to generate a test template. This script takes two optional arguments:

  1. -b : a bug number
  2. -type : template type, one of {html|xhtml|xul|chrome}; defaults to html.

For example:

cd testing/mochitest/
perl gen_template.pl -b=123456 --type=plain > path/to/test_bug123456.html
perl gen_template.pl -b=123456 --type=xul > path/to/test_bug123456.xul

Note: Mochitest requires the file name of all tests to begin with the string "test_", while mochitest-browser-chrome tests must begin with "browser_". See the section below for help on deciding where your tests should go in the tree.

In addition to the Mochitest boilerplate code, the script generates an element with the id 'content' and an element with the id 'display'. Your test may manipulate these elements as well as any other elements you add to the page.
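
For instance, a test might use these generated elements along the following lines (a sketch; the element contents are purely illustrative):

var display = document.getElementById("display");
var content = document.getElementById("content");

ok(display, "the template should provide a #display element");
ok(content, "the template should provide a #content element");

// Tests typically put visible output in #display and helper markup in #content.
display.textContent = "rendering output goes here";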

Asynchronous Tests

Sometimes tests involve asynchronous patterns, such as waiting for events or observers. In these cases, you will need to call waitForExplicitFinish and finish like so:

SimpleTest.waitForExplicitFinish();
addEventListener("keypress", function() {
  // ... run test ...
  SimpleTest.finish();
}, false);
// ... simulate key press ...

If you need more time, requestLongerTimeout(number) can be quite useful.  requestLongerTimeout() takes an integer factor that is a multiplier for the default 45 second timeout. So a factor of 2 means: "Wait for at least 90s (2*45s)". This is really useful if you want to pause execution to do a little debugging.
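
For example (the factor of 2 here is just illustrative):

// Allow this test up to 2 * 45s = 90s before the harness times it out.
SimpleTest.requestLongerTimeout(2);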

Test functions

Each test must contain some JavaScript that will run and tell Mochitest whether the test has passed or failed. SimpleTest.js provides a number of functions for the test to use, to communicate the results back to Mochitest. These include:

  • ok(expressionThatShouldBeTrue, "Description of the check") -- tests a value for its truthfulness
  • is(actualValue, expectedValue, "Description of the check") -- compares two values (using Object.is)
  • isnot(actualValue, unexpectedValue, "Description of the check") -- opposite of is()

If you want to include a test for something that currently fails, don't just comment it out!  Instead, use one of the "todo" equivalents so Tinderbox can notice if it suddenly starts passing (at which point the test can be reenabled):

  • todo(falseButShouldBeTrue, "Description of the check")
  • todo_is(actualValue, expectedValue, "Description of the check")
  • todo_isnot(actualValue, unexpectedValue, "Description of the check")

Tests can call a function info("Message string") to write a message to the test log.
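
A short sketch showing these helpers in a plain test (the element and the messages are purely illustrative):

var box = document.createElement("div");
box.id = "display-check";

ok(box instanceof HTMLDivElement, "createElement('div') should return a div");
is(box.id, "display-check", "the id we set should be reported back");
isnot(box.id, "", "the id should not be the empty string");
todo(false, "replace with ok() once the feature under test is implemented");
info("element checks finished");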

Helper functions

Right now, useful helpers derived from MochiKit are available in testing/mochitest/tests/SimpleTest/SimpleTest.js. Although all of MochiKit is available at testing/mochitest/MochiKit, only include the files you require, to minimize test load times. bug 367569 added the sendChar, sendKey, and sendString helpers. These are available in testing/mochitest/tests/SimpleTest/EventUtils.js.

If you need to access some data files from your mochitest, you can get a URI for them by using SimpleTest.getTestFileURL("relative/path/to/data.file"). You can then fetch their content using XMLHttpRequest, as sketched below.
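
A minimal sketch, assuming a data file named data.json sits next to the test (the file name and the check are illustrative):

SimpleTest.waitForExplicitFinish();

var url = SimpleTest.getTestFileURL("data.json");  // hypothetical data file
var xhr = new XMLHttpRequest();
xhr.open("GET", url);
xhr.onload = function() {
  ok(xhr.responseText.length > 0, "the data file should not be empty");
  SimpleTest.finish();
};
xhr.send();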

Adding tests to the tree

Once you've written a new test you need to add it to the Mozilla source tree and tell the build system about it so that the Mozilla tinderboxes will run it automatically.

Choosing a location

New Mochitest tests should go somewhere close to the code they are testing, hopefully in the same module, so that ownership of the test cases is clear. For example, if you create a new test for some HTML feature, you probably want to put the test in content/html/content/test or content/html/document/test. If a test directory does not exist near the code you are testing, you can add a new test directory as the patch in bug 368531 demonstrates.

Add an entry to mochitest.ini

To tell the build system about your new test you need to add your test file to the mochitest.ini in your test's directory. For example, let's say you are adding a new test called 'test_new_feature.html'. Simply edit the mochitest.ini (or chrome.ini etc. depending on the mochitest flavor) in the same directory and add a new line:

[test_new_feature.html]

Optionally, you can specify metadata for your test, like whether to skip the test on certain platforms:

[test_new_feature.html]
skip-if = os == 'win'

The mochitest.ini format defines a long list of metadata keys that the manifest parser recognizes.

Adding a new mochitest.ini or chrome.ini file

If a mochitest.ini or chrome.ini file does not exist in the directory where you want to add a test, add it and update the moz.build file in that directory accordingly. For example, in gfx/layers/moz.build, we add these two manifest files:

MOCHITEST_MANIFESTS += ['apz/test/mochitest.ini']
MOCHITEST_CHROME_MANIFESTS += ['apz/test/chrome.ini']

Building and running new tests

Before committing a new test you should check that your manifest changes are correct and that your tests pass as you expect them to. To check your test, first export it to the Mochitest directory by running the command:

./mach build path

giving the path of the directory that contains the test file. Now run Mochitest as explained above.

SSL and https-enabled tests

Mochitests must be run from http://mochi.test/ to succeed; however, some tests may require the use of additional protocols, hosts, or ports to test cross-origin functionality. The Mochitest harness addresses this need by mirroring all content of the original server onto a variety of other servers through the magic of proxy autoconfig and SSL tunneling. The full list of schemes, hosts, and ports on which tests are served is specified in build/pgo/server-locations.txt. The origins described there are not all configured identically: some of them specify particular SSL certificates for testing purposes, while some allow pages on that server to request elevated privileges; read the file for full details.

How it works

The Mochitest harness includes preference values which cause the browser to use proxy autoconfig to match requested URLs with servers.  The network.proxy.autoconfig_url preference is set to a data: URL that encodes the JavaScript function FindProxyForURL, which determines the proxy for a given URL.  In the case of SSL sites to be mirrored, the function maps them to an SSL tunnel, which transparently forwards the traffic to the actual server, as per the description of the CONNECT method given in RFC 2817.  In this manner a single HTTP server at http://127.0.0.1:8888 can successfully emulate dozens of servers at distinct locations.
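
Conceptually, the autoconfig function looks something like the following sketch (the real function generated by the harness covers many more hosts and ports; the addresses here are illustrative):

function FindProxyForURL(url, host) {
  // Plain HTTP origins are all answered by the single test server.
  if (url.indexOf("http://") == 0) {
    return "PROXY 127.0.0.1:8888";
  }
  // HTTPS origins are routed through the SSL tunnel, which forwards
  // the decrypted traffic on to the same server.
  if (url.indexOf("https://") == 0) {
    return "PROXY 127.0.0.1:4443";
  }
  return "DIRECT";
}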

For further details on Mochitest SSL functionality and how to modify it to change a certificate or add a new https server, see Modifying Mochitest SSL behavior.

Getting Stack Traces

To get a stack trace when Mochitest crashes:

  1. Get a minidump_stackwalk binary for your platform from http://hg.mozilla.org/build/tools/file/tip/breakpad/
  2. Set the MINIDUMP_STACKWALK environment variable to point to the absolute path of the binary.

If the resulting stack trace doesn't have line numbers, run mach buildsymbols to generate the requisite symbol files.  See Building Firefox with Debug Symbols for more information.

FAQ

What if my tests aren't done when onload fires?

Call SimpleTest.waitForExplicitFinish() before onload fires. Then, when you're done, call SimpleTest.finish().

How can I get the full log output for my test in automation for debugging?

Add the following to your test:

SimpleTest.requestCompleteLog();

What if I need to change a preference to run my test?

The SpecialPowers object provides APIs to get and set preferences.

var oldVal = SpecialPowers.getIntPref("dom.max_script_run_time");
SpecialPowers.setIntPref("dom.max_script_run_time", 0);

// do what you need

SpecialPowers.setIntPref("dom.max_script_run_time", oldVal);
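
If the pref only needs to change for the duration of the test, SpecialPowers.pushPrefEnv is a common alternative, since the harness restores the pref afterwards; a small sketch (the pref name is just an example):

SimpleTest.waitForExplicitFinish();
SpecialPowers.pushPrefEnv(
  { set: [["dom.max_script_run_time", 0]] },  // prefs to set for this test
  function() {
    // ... run the part of the test that depends on the pref ...
    SimpleTest.finish();
  }
);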

If you need to change a pref when running a test locally, you can use the --setpref option:

./mach mochitest --setpref="javascript.options.jit.chrome=false" somePath/someTestFile.html

Equally, if you need to change a string pref:

./mach mochitest --setpref="webgl.osmesa=string with whitespace" somePath/someTestFile.html

Can tests be run under a chrome URL?

Yes, use ./mach mochitest -f chrome. Keep in mind that the xpcshell test harness should be your first choice for XPCOM testing. Only use Mochitest if you need events, browser features, networking, and so on.

How can I get around the error "Permission denied to get property XPCComponents.classes"?

If your test needs to perform very specific privileged actions, you should either use one of the existing SpecialPowers APIs, or see if you can add a new API to the SpecialPowers object. If your test needs to use a wide variety of privileged objects and APIs, you should write your test as a Chrome Mochitest instead.

Do not use enablePrivilege in new tests. It will be removed in a future version of Gecko.

How do I change the HTTP headers or status sent with a file used in a Mochitest?

Create a text file next to the file whose headers you want to modify. The name of the text file should be the name of the file whose headers you're modifying followed by ^headers^. For example, if you have a file foo.jpg, the text file should be named foo.jpg^headers^. (Don't try to actually use the headers file in any other way in the test, because the HTTP server's hidden-file functionality prevents any file ending in exactly one ^ from being served.) Edit the file to contain the headers and/or status you want to set, like so:

HTTP 404 Not Found
Content-Type: text/html
Random-Header-of-Doom: 17

The first line sets the HTTP status and a description (optional) associated with the file. This line is optional; you don't need it if you're fine with the normal response status and description. Any other lines in the file describe additional headers which you want to add or overwrite (most typically the Content-Type header, for the latter case) on the response. The format follows the conventions of HTTP, except that you don't need to have HTTP line endings and you can't use a header more than once (the last line for a particular header wins). The file may end with at most one blank line to match Unix text file conventions, but the trailing newline isn't strictly necessary.

How do I test issues which only show up when tests are run across domains?

The Mochitest harness runs one web server to serve tests, but through the magic of proxy autoconfig, all test files are available on a variety of different domains and ports. The full list of domains and ports on which tests are served, all of which serve exactly the same content as http://mochi.test/, is specified in build/pgo/server-locations.txt. If your test needs to violate cross-origin security to test something, consult with a Mochitest peer about your options, as they are still in development.

Do not use enablePrivilege in new tests. It will be removed in a future version of Gecko.

How do I write tests that check header values, method types, etc. of HTTP requests?

To write such a test, you need to write an SJS (server-side JavaScript) file for it. An SJS is simply a JavaScript file with the extension .sjs which is loaded in a sandbox. Don't forget to reference it from your mochitest.ini file too:

[DEFAULT]
  support-files =
    test_file.sjs

The global property handleRequest defined by the script is then executed with request and response objects, and the script populates the response based on the information in the request.

Here's an example of a simple SJS:

function handleRequest(request, response) {
  //Allow cross-origin, so you can XHR to it!
  response.setHeader("Access-Control-Allow-Origin", "*", false);  
  // avoid confusing cache behaviors
  response.setHeader("Cache-Control", "no-cache", false);
  response.setHeader("Content-Type", "text/plain", false);
  response.write("Hello world!");
}

The file is served, for example, at either http://mochi.test:8888/tests/PATH/TO/YOUR/test_file.sjs or http://{server-location}/tests/PATH/TO/YOUR/test_file.sjs (see build/pgo/server-locations.txt for the list of server locations). Note the "/tests/" bit at the front of the path. If you want to actually execute the file, you need to reference it somehow. For instance, you can XHR to it, or you could use an HTML element:

var xhr = new XMLHttpRequest();
xhr.open("GET", "http://test/tests/dom/manifest/test/test_file.sjs");
xhr.onload = function(e) { console.log("loaded!", this.responseText); };
xhr.send();

The exact properties of the request and response parameters are defined in the nsIHttpRequestMetadata and nsIHttpResponse interfaces in nsIHttpServer.idl. However, here are a few useful ones:

Properties and methods of Request object

.scheme
string, the scheme of the request.
.host
string, the host of the request.
.port
string, the port of the request.
.method
string, the HTTP method.
.httpVersion
string, the protocol version, typically "1.1".
.path
string, the path of the request.
.headers
object, name and values representing the headers.
.queryString
string, the query string of the requested URL.
.bodyInputStream
??
.getHeader( name )
Gets a request header by name.
.hasHeader( name )
Boolean, checks if a header was provided with the request.

Note: The browser is free to cache responses generated by your script. If you ever want an SJS to return different data for multiple requests to the same URL, you should add a Cache-Control: no-cache header to the response to prevent the test from accidentally failing, especially if it's manually run multiple times in the same Mochitest session.
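
As a hedged sketch tying a few of these together, an SJS could echo back parts of the request and disable caching (the header name is just an example):

function handleRequest(request, response) {
  response.setHeader("Cache-Control", "no-cache", false);
  response.setHeader("Content-Type", "text/plain", false);

  // Echo the request line back to the test.
  response.write(request.method + " " + request.path +
                 (request.queryString ? "?" + request.queryString : "") + "\n");

  // Echo a header, if the test sent one.
  if (request.hasHeader("X-Test-Header")) {
    response.write("X-Test-Header: " + request.getHeader("X-Test-Header") + "\n");
  }
}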

How do I keep state across loads of different server-side scripts?

Server-side scripts in Mochitest are run inside sandboxes, with a new sandbox created for each new load.  Consequently, any variables set in a handler don't persist across loads.  To support state storage, use the getState(k) and setState(k, v) methods defined on the global object.  These methods expose a key-value storage mechanism for the server, with keys and values as strings.  (Use JSON to store objects and other structured data.)  The myriad servers in Mochitest are in reality a single server with some proxying and tunnelling magic, so a stored state is the same in all servers at all times.
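
As a small sketch of per-path state (the key name is illustrative), an SJS could count how many times it has been loaded:

function handleRequest(request, response) {
  // getState/setState work with string keys and string values.
  var count = Number(getState("loadCount") || "0") + 1;
  setState("loadCount", String(count));

  response.setHeader("Content-Type", "text/plain", false);
  response.setHeader("Cache-Control", "no-cache", false);
  response.write("load number " + count);
}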

The getState and setState methods are scoped to the path being loaded.  For example, the absolute URLs /foo/bar/baz, /foo/bar/baz?quux, and /foo/bar/baz#fnord all share the same state; the state for /foo/bar is entirely separate.  You should use per-path state whenever possible to avoid inter-test dependencies and bugs.  However, in rare cases it may be necessary for two scripts to collaborate in some manner, and it may not be possible to use a custom query string to request divergent behaviors from the script.  For this use case only, you should use the getSharedState(k) and setSharedState(k, v) methods defined on the global object.  No restrictions are placed on access to this whole-server shared state, and any script may add new state that any other script may delete.  To avoid accidental conflicts, you should use a key within a faux namespace. For example, if you needed shared state for an HTML5 video test, you might use a key like dom.media.video:sharedState.

A further form of state storage is provided by the getObjectState(k) and setObjectState(k, v) methods, which will store any nsISupports object.  These methods reside on the nsIHttpServer interface, but a limitation of the sandbox object used by the server to process SJS responses means that the former is present in the SJS request handler's global environment with the signature getObjectState(k, callback), where callback is a function to be invoked by getObjectState with the object corresponding to the provided key as the sole argument.  Note that this value mapping requires the value to be an XPCOM object; an arbitrary JavaScript object with no QueryInterface method is insufficient.  If you wish to store a JavaScript object, you may find it useful to provide the object with a QueryInterface implementation and then make use of wrappedJSObject to reveal the actual JavaScript object through the wrapping performed by XPConnect.
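
A hedged sketch of the wrappedJSObject approach described above, assuming setObjectState is exposed to the SJS sandbox in the same way as getObjectState (the key and fields are illustrative):

function handleRequest(request, response) {
  getObjectState("my-test:counter", function(stored) {
    var obj = stored && stored.wrappedJSObject;
    if (!obj) {
      // A plain JS object made storable: give it a permissive QueryInterface
      // and expose itself via wrappedJSObject so later loads can unwrap it.
      obj = { hits: 0, QueryInterface: function(iid) { return this; } };
      obj.wrappedJSObject = obj;
      setObjectState("my-test:counter", obj);
    }
    obj.hits++;
    response.setHeader("Content-Type", "text/plain", false);
    response.setHeader("Cache-Control", "no-cache", false);
    response.write("hits: " + obj.hits);
  });
}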

For further details on state-saving mechanisms provided by httpd.js, see netwerk/test/httpserver/nsIHttpServer.idl and the nsIHttpServer.get(Shared|Object)?State methods.

How do I write an SJS script that responds asynchronously?

Sometimes you need to respond to a request asynchronously, for example after waiting for a short period of time. You can do this by using the processAsync() and finish() functions on the response object passed to the handleRequest() function.

processAsync() must be called before returning from handleRequest(). Once it has been called, you can call methods on the response object at any point to send more of the response. Once you are done, call the finish() function. For example, you can use the setState()/getState() functions described above to store a request and later retrieve and finish it. However, be aware that the browser often reorders requests, so your code must be resilient to that to avoid intermittent failures.

var timer = null;

function handleRequest(request, response)
{
  response.processAsync();
  response.setHeader("Content-Type", "text/plain", false);
  response.write("hello...");

  timer = Components.classes["@mozilla.org/timer;1"].createInstance(Components.interfaces.nsITimer);
  timer.initWithCallback(function()
  {
    response.write("world!");
    response.finish();
  }, 5 * 1000 /* milliseconds */, Components.interfaces.nsITimer.TYPE_ONE_SHOT);
}

For more details, see the processAsync() function documentation in netwerk/test/httpserver/nsIHttpServer.idl.

How do I get access to the files on the server as XPCOM objects from an SJS script? (1.9.3 or later)

If you need access to a file, because it's easier to store image data in a file than directly in an SJS script, use the presupplied SERVER_ROOT object state available to SJS scripts running in Mochitest:

function handleRequest(req, res)
{
  var file;
  getObjectState("SERVER_ROOT", function(serverRoot)
  {
    file = serverRoot.getFile("tests/content/media/test/320x240.ogv");
  });

  // file is now an XPCOM object referring to the given file
  res.write("file: " + file);
}

The path you specify is used as a path relative to the root directory served by httpd.js, and an nsIFile corresponding to the file at that location is returned.  Beware of typos: the file you specify doesn't actually have to exist, because file objects are mere encapsulations of string paths.

Diagnosing and fixing leakcheck failures

In debug builds, Mochitests output a log of the windows and docshells that are created during the test. At the end of the test, the test runner runs a leakcheck analysis to determine whether any of them were not cleaned up before the test ended.

Leaks can happen for a variety of reasons. One common one is that a JavaScript event listener is retaining a reference that keeps the window alive.

// Add an observer.
Services.obs.addObserver(myObserver, "event-name");

// Make sure to clean it up, or it may leak!
Services.obs.removeObserver(myObserver, "event-name");

Other sources of issues include accidentally leaving a window or iframe attached to the DOM, or setting an iframe's src to a blank string (which creates an about:blank page) rather than removing the iframe.
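
For example, a test that needs a temporary iframe should detach it before finishing, so leakcheck does not see a lingering window (a sketch; the helper page path is hypothetical):

var iframe = document.createElement("iframe");
iframe.src = "http://mochi.test:8888/tests/path/to/frame_helper.html";
document.body.appendChild(iframe);

// ... exercise the iframe ...

// Remove the frame entirely rather than pointing it at a blank string,
// so no stray about:blank window is left behind when the test ends.
iframe.remove();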

Finding the leak can be difficult, but the first step is to reproduce it locally. Ensure you are on a debug build and that the MOZ_QUIET environment variable is not set, since the leakcheck analysis relies on the test output. After reproducing the leak, start commenting out code until the leak goes away; once it stops reproducing, you can narrow down the exact location where it is happening.
