=======================
Advanced testing topics
=======================

The request factory
===================

.. currentmodule:: django.test

.. class:: RequestFactory

The :class:`~django.test.RequestFactory` shares the same API as
the test client. However, instead of behaving like a browser, the
RequestFactory provides a way to generate a request instance that can
be used as the first argument to any view. This means you can test a
view function the same way as you would test any other function -- as
a black box, with exactly known inputs, testing for specific outputs.

The API for the :class:`~django.test.RequestFactory` is a slightly
restricted subset of the test client API:

* It only has access to the HTTP methods :meth:`~Client.get()`,
  :meth:`~Client.post()`, :meth:`~Client.put()`,
  :meth:`~Client.delete()`, :meth:`~Client.head()`,
  :meth:`~Client.options()`, and :meth:`~Client.trace()`.

* These methods accept all the same arguments *except* for
  ``follow``. Since this is just a factory for producing
  requests, it's up to you to handle the response.

* It does not support middleware. Session and authentication
  attributes must be supplied by the test itself if required
  for the view to function properly.

Example
-------

The following is a unit test using the request factory::

    from django.contrib.auth.models import AnonymousUser, User
    from django.test import RequestFactory, TestCase

    from .views import MyView, my_view

    class SimpleTest(TestCase):
        def setUp(self):
            # Every test needs access to the request factory.
            self.factory = RequestFactory()
            self.user = User.objects.create_user(
                username='jacob', email='jacob@…', password='top_secret')

        def test_details(self):
            # Create an instance of a GET request.
            request = self.factory.get('/customer/details')

            # Recall that middleware are not supported. You can simulate a
            # logged-in user by setting request.user manually.
            request.user = self.user

            # Or you can simulate an anonymous user by setting request.user to
            # an AnonymousUser instance.
            request.user = AnonymousUser()

            # Test my_view() as if it were deployed at /customer/details
            response = my_view(request)

            # Use this syntax for class-based views.
            response = MyView.as_view()(request)
            self.assertEqual(response.status_code, 200)

AsyncRequestFactory
-------------------

.. class:: AsyncRequestFactory

``RequestFactory`` creates WSGI-like requests. If you want to create ASGI-like
requests, including having a correct ASGI ``scope``, you can instead use
``django.test.AsyncRequestFactory``.

This class is directly API-compatible with ``RequestFactory``, with the only
difference being that it returns ``ASGIRequest`` instances rather than
``WSGIRequest`` instances. All of its methods are still synchronous callables.

Arbitrary keyword arguments in ``defaults`` are added directly into the ASGI
scope.

Testing class-based views
=========================

In order to test class-based views outside of the request/response cycle you
must ensure that they are configured correctly, by calling
:meth:`~django.views.generic.base.View.setup` after instantiation.

For example, assuming the following class-based view:

.. code-block:: python
    :caption: ``views.py``

    from django.views.generic import TemplateView


    class HomeView(TemplateView):
        template_name = 'myapp/home.html'

        def get_context_data(self, **kwargs):
            kwargs['environment'] = 'Production'
            return super().get_context_data(**kwargs)

You may directly test the ``get_context_data()`` method by first instantiating
the view, then passing a ``request`` to ``setup()``, before proceeding with
your test's code:

.. code-block:: python
    :caption: ``tests.py``

    from django.test import RequestFactory, TestCase
    from .views import HomeView


    class HomePageTest(TestCase):
        def test_environment_set_in_context(self):
            request = RequestFactory().get('/')
            view = HomeView()
            view.setup(request)

            context = view.get_context_data()
            self.assertIn('environment', context)

.. _topics-testing-advanced-multiple-hosts:

Tests and multiple host names
=============================

The :setting:`ALLOWED_HOSTS` setting is validated when running tests. This
allows the test client to differentiate between internal and external URLs.

Projects that support multitenancy or otherwise alter business logic based on
the request's host and use custom host names in tests must include those hosts
in :setting:`ALLOWED_HOSTS`.

The first option to do so is to add the hosts to your settings file. For
example, the test suite for docs.djangoproject.com includes the following::

    from django.test import TestCase

    class SearchFormTestCase(TestCase):
        def test_empty_get(self):
            response = self.client.get(
                '/en/dev/search/',
                HTTP_HOST='docs.djangoproject.dev:8000',
            )
            self.assertEqual(response.status_code, 200)

and the settings file includes a list of the domains supported by the project::

    ALLOWED_HOSTS = [
        'www.djangoproject.dev',
        'docs.djangoproject.dev',
        ...
    ]

Another option is to add the required hosts to :setting:`ALLOWED_HOSTS` using
:meth:`~django.test.override_settings()` or
:attr:`~django.test.SimpleTestCase.modify_settings()`. This option may be
preferable in standalone apps that can't package their own settings file or
for projects where the list of domains is not static (e.g., subdomains for
multitenancy).
For example, you could write a test for the domain
``http://otherserver/`` as follows::

    from django.test import TestCase, override_settings

    class MultiDomainTestCase(TestCase):
        @override_settings(ALLOWED_HOSTS=['otherserver'])
        def test_other_domain(self):
            response = self.client.get('http://otherserver/foo/bar/')

Disabling :setting:`ALLOWED_HOSTS` checking (``ALLOWED_HOSTS = ['*']``) when
running tests prevents the test client from raising a helpful error message if
you follow a redirect to an external URL.

.. _topics-testing-advanced-multidb:

Tests and multiple databases
============================

.. _topics-testing-primaryreplica:

Testing primary/replica configurations
--------------------------------------

If you're testing a multiple database configuration with primary/replica
(referred to as master/slave by some databases) replication, this strategy of
creating test databases poses a problem.

When the test databases are created, there won't be any replication,
and as a result, data created on the primary won't be seen on the
replica.

To compensate for this, Django allows you to define that a database is
a *test mirror*. Consider the following (simplified) example database
configuration::

    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.mysql',
            'NAME': 'myproject',
            'HOST': 'dbprimary',
            # ... plus some other settings
        },
        'replica': {
            'ENGINE': 'django.db.backends.mysql',
            'NAME': 'myproject',
            'HOST': 'dbreplica',
            'TEST': {
                'MIRROR': 'default',
            },
            # ... plus some other settings
        },
    }

In this setup, we have two database servers: ``dbprimary``, described
by the database alias ``default``, and ``dbreplica`` described by the
alias ``replica``. As you might expect, ``dbreplica`` has been configured
by the database administrator as a read replica of ``dbprimary``, so in
normal activity, any write to ``default`` will appear on ``replica``.

If Django created two independent test databases, this would break any
tests that expected replication to occur.
However, the ``replica``
database has been configured as a test mirror (using the
:setting:`MIRROR <TEST_MIRROR>` test setting), indicating that under
testing, ``replica`` should be treated as a mirror of ``default``.

When the test environment is configured, a test version of ``replica``
will *not* be created. Instead the connection to ``replica``
will be redirected to point at ``default``. As a result, writes to
``default`` will appear on ``replica`` -- but because they are actually
the same database, not because there is data replication between the
two databases. As this depends on transactions, the tests must use
:class:`~django.test.TransactionTestCase` instead of
:class:`~django.test.TestCase`.

.. _topics-testing-creation-dependencies:

Controlling creation order for test databases
---------------------------------------------

By default, Django will assume all databases depend on the ``default``
database and therefore always create the ``default`` database first.
However, no guarantees are made on the creation order of any other
databases in your test setup.

If your database configuration requires a specific creation order, you
can specify the dependencies that exist using the :setting:`DEPENDENCIES
<TEST_DEPENDENCIES>` test setting. Consider the following (simplified)
example database configuration::

    DATABASES = {
        'default': {
            # ... db settings
            'TEST': {
                'DEPENDENCIES': ['diamonds'],
            },
        },
        'diamonds': {
            # ... db settings
            'TEST': {
                'DEPENDENCIES': [],
            },
        },
        'clubs': {
            # ... db settings
            'TEST': {
                'DEPENDENCIES': ['diamonds'],
            },
        },
        'spades': {
            # ... db settings
            'TEST': {
                'DEPENDENCIES': ['diamonds', 'hearts'],
            },
        },
        'hearts': {
            # ... db settings
            'TEST': {
                'DEPENDENCIES': ['diamonds', 'clubs'],
            },
        },
    }

Under this configuration, the ``diamonds`` database will be created first,
as it is the only database alias without dependencies.
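Conceptually, the creation order is a topological sort of the ``DEPENDENCIES``
graph. The following standalone sketch (not Django's actual implementation)
computes a valid order for the configuration above and raises an error on
circular dependencies:

```python
# Illustrative sketch only -- not Django's implementation. Computes a valid
# test-database creation order from the TEST['DEPENDENCIES'] graph using a
# depth-first topological sort, raising on circular dependencies.
def creation_order(dependencies):
    order, visiting, done = [], set(), set()

    def visit(alias):
        if alias in done:
            return
        if alias in visiting:
            raise ValueError('Circular dependency involving %r' % alias)
        visiting.add(alias)
        for dep in dependencies.get(alias, []):
            visit(dep)
        visiting.discard(alias)
        done.add(alias)
        order.append(alias)

    for alias in sorted(dependencies):
        visit(alias)
    return order

deps = {
    'default': ['diamonds'],
    'diamonds': [],
    'clubs': ['diamonds'],
    'spades': ['diamonds', 'hearts'],
    'hearts': ['diamonds', 'clubs'],
}
# 'diamonds' comes first; 'spades' comes last, after 'hearts'.
print(creation_order(deps))
```
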
The ``default`` and
``clubs`` aliases will be created next (although the order of creation of this
pair is not guaranteed), then ``hearts``, and finally ``spades``.

If there are any circular dependencies in the :setting:`DEPENDENCIES
<TEST_DEPENDENCIES>` definition, an
:exc:`~django.core.exceptions.ImproperlyConfigured` exception will be raised.

Advanced features of ``TransactionTestCase``
============================================

.. attribute:: TransactionTestCase.available_apps

    .. warning::

        This attribute is a private API. It may be changed or removed without
        a deprecation period in the future, for instance to accommodate changes
        in application loading.

        It's used to optimize Django's own test suite, which contains hundreds
        of models but no relations between models in different applications.

    By default, ``available_apps`` is set to ``None``. After each test, Django
    calls :djadmin:`flush` to reset the database state. This empties all tables
    and emits the :data:`~django.db.models.signals.post_migrate` signal, which
    recreates one content type and four permissions for each model. This
    operation gets expensive proportionally to the number of models.

    Setting ``available_apps`` to a list of applications instructs Django to
    behave as if only the models from these applications were available. The
    behavior of ``TransactionTestCase`` changes as follows:

    - :data:`~django.db.models.signals.post_migrate` is fired before each
      test to create the content types and permissions for each model in
      available apps, in case they're missing.
    - After each test, Django empties only tables corresponding to models in
      available apps. However, at the database level, truncation may cascade to
      related models in unavailable apps.
Furthermore,
      :data:`~django.db.models.signals.post_migrate` isn't fired; it will be
      fired by the next ``TransactionTestCase``, after the correct set of
      applications is selected.

    Since the database isn't fully flushed, if a test creates instances of
    models not included in ``available_apps``, they will leak and they may
    cause unrelated tests to fail. Be careful with tests that use sessions;
    the default session engine stores them in the database.

    Since :data:`~django.db.models.signals.post_migrate` isn't emitted after
    flushing the database, its state after a ``TransactionTestCase`` isn't the
    same as after a ``TestCase``: it's missing the rows created by listeners
    to :data:`~django.db.models.signals.post_migrate`. Considering the
    :ref:`order in which tests are executed <order-of-tests>`, this isn't an
    issue, provided either all ``TransactionTestCase`` in a given test suite
    declare ``available_apps``, or none of them.

    ``available_apps`` is mandatory in Django's own test suite.

.. attribute:: TransactionTestCase.reset_sequences

    Setting ``reset_sequences = True`` on a ``TransactionTestCase`` will make
    sure sequences are always reset before the test run::

        class TestsThatDependsOnPrimaryKeySequences(TransactionTestCase):
            reset_sequences = True

            def test_animal_pk(self):
                lion = Animal.objects.create(name="lion", sound="roar")
                # lion.pk is guaranteed to always be 1
                self.assertEqual(lion.pk, 1)

    Unless you are explicitly testing primary key sequence numbers, it is
    recommended that you do not hard code primary key values in tests.

    Using ``reset_sequences = True`` will slow down the test, since the primary
    key reset is a relatively expensive database operation.

.. _topics-testing-enforce-run-sequentially:

Enforce running test classes sequentially
=========================================

If you have test classes that cannot be run in parallel (e.g. because they
share a common resource), you can use ``django.test.testcases.SerializeMixin``
to run them sequentially.
This mixin uses a filesystem ``lockfile``.
For example, you can use ``__file__`` to determine that all test classes in the
same file that inherit from ``SerializeMixin`` will run sequentially::

    import os

    from django.test import TestCase
    from django.test.testcases import SerializeMixin

    class ImageTestCaseMixin(SerializeMixin):
        lockfile = __file__

        def setUp(self):
            self.filename = os.path.join(temp_storage_dir, 'my_file.png')
            self.file = create_file(self.filename)

    class RemoveImageTests(ImageTestCaseMixin, TestCase):
        def test_remove_image(self):
            os.remove(self.filename)
            self.assertFalse(os.path.exists(self.filename))

    class ResizeImageTests(ImageTestCaseMixin, TestCase):
        def test_resize_image(self):
            resize_image(self.file, (48, 48))
            self.assertEqual(get_image_size(self.file), (48, 48))

.. _testing-reusable-applications:

Using the Django test runner to test reusable applications
==========================================================

If you are writing a :doc:`reusable application </intro/reusable-apps>`
you may want to use the Django test runner to run your own test suite
and thus benefit from the Django testing infrastructure.

A common practice is a *tests* directory next to the application code, with the
following structure::

    runtests.py
    polls/
        __init__.py
        models.py
        ...
    tests/
        __init__.py
        models.py
        test_settings.py
        tests.py

Let's take a look inside a couple of those files:

.. code-block:: python
    :caption: ``runtests.py``

    #!/usr/bin/env python
    import os
    import sys

    import django
    from django.conf import settings
    from django.test.utils import get_runner

    if __name__ == "__main__":
        os.environ['DJANGO_SETTINGS_MODULE'] = 'tests.test_settings'
        django.setup()
        TestRunner = get_runner(settings)
        test_runner = TestRunner()
        failures = test_runner.run_tests(["tests"])
        sys.exit(bool(failures))

This is the script that you invoke to run the test suite.
It sets up the
Django environment, creates the test database and runs the tests.

For the sake of clarity, this example contains only the bare minimum
necessary to use the Django test runner. You may want to add
command-line options for controlling verbosity, passing in specific test
labels to run, etc.

.. code-block:: python
    :caption: ``tests/test_settings.py``

    SECRET_KEY = 'fake-key'
    INSTALLED_APPS = [
        "tests",
    ]

This file contains the :doc:`Django settings </topics/settings>`
required to run your app's tests.

Again, this is a minimal example; your tests may require additional
settings to run.

Since the *tests* package is included in :setting:`INSTALLED_APPS` when
running your tests, you can define test-only models in its ``models.py``
file.

.. _other-testing-frameworks:

Using different testing frameworks
==================================

Clearly, :mod:`unittest` is not the only Python testing framework. While Django
doesn't provide explicit support for alternative frameworks, it does provide a
way to invoke tests constructed for an alternative framework as if they were
normal Django tests.

When you run ``./manage.py test``, Django looks at the :setting:`TEST_RUNNER`
setting to determine what to do. By default, :setting:`TEST_RUNNER` points to
``'django.test.runner.DiscoverRunner'``. This class defines the default Django
testing behavior. This behavior involves:

#. Performing global pre-test setup.

#. Looking for tests in any file below the current directory whose name matches
   the pattern ``test*.py``.

#. Creating the test databases.

#. Running ``migrate`` to install models and initial data into the test
   databases.

#. Running the :doc:`system checks </topics/checks>`.

#. Running the tests that were found.

#. Destroying the test databases.

#. Performing global post-test teardown.

If you define your own test runner class and point :setting:`TEST_RUNNER` at
that class, Django will execute your test runner whenever you run
``./manage.py test``.
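The :setting:`TEST_RUNNER` value is a dotted import path that Django resolves
to a class (via ``django.test.utils.get_runner``). The underlying idea can be
sketched without Django; the stdlib class used below is just an example, and a
project would point at something like ``'myproject.runners.CustomTestRunner'``
(a hypothetical name):

```python
# Sketch of resolving a dotted path, such as the TEST_RUNNER setting, to a
# class object; Django's get_runner() does essentially this with its
# settings object.
import importlib

def import_by_path(dotted_path):
    # Split 'package.module.ClassName' into module path and attribute name.
    module_path, _, class_name = dotted_path.rpartition('.')
    module = importlib.import_module(module_path)
    return getattr(module, class_name)

# Example with a stdlib class standing in for a real runner path.
runner_class = import_by_path('unittest.runner.TextTestRunner')
print(runner_class.__name__)  # TextTestRunner
```
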
In this way, it is possible to use any test framework
that can be executed from Python code, or to modify the Django test execution
process to satisfy whatever testing requirements you may have.

.. _topics-testing-test_runner:

Defining a test runner
----------------------

.. currentmodule:: django.test.runner

A test runner is a class defining a ``run_tests()`` method. Django ships
with a ``DiscoverRunner`` class that defines the default Django testing
behavior. This class defines the ``run_tests()`` entry point, plus a
selection of other methods that are used by ``run_tests()`` to set up, execute
and tear down the test suite.

.. class:: DiscoverRunner(pattern='test*.py', top_level=None, verbosity=1, interactive=True, failfast=False, keepdb=False, reverse=False, debug_mode=False, debug_sql=False, parallel=0, tags=None, exclude_tags=None, test_name_patterns=None, pdb=False, buffer=False, enable_faulthandler=True, timing=True, shuffle=False, logger=None, **kwargs)

    ``DiscoverRunner`` will search for tests in any file matching ``pattern``.

    ``top_level`` can be used to specify the directory containing your
    top-level Python modules. Usually Django can figure this out automatically,
    so it's not necessary to specify this option. If specified, it should
    generally be the directory containing your ``manage.py`` file.

    ``verbosity`` determines the amount of notification and debug information
    that will be printed to the console; ``0`` is no output, ``1`` is normal
    output, and ``2`` is verbose output.

    If ``interactive`` is ``True``, the test suite has permission to ask the
    user for instructions when the test suite is executed. An example of this
    behavior would be asking for permission to delete an existing test
    database.
    If ``interactive`` is ``False``, the test suite must be able to
    run without any manual intervention.

    If ``failfast`` is ``True``, the test suite will stop running after the
    first test failure is detected.

    If ``keepdb`` is ``True``, the test suite will use the existing database,
    or create one if necessary. If ``False``, a new database will be created,
    prompting the user to remove the existing one, if present.

    If ``reverse`` is ``True``, test cases will be executed in the opposite
    order. This could be useful to debug tests that aren't properly isolated
    and have side effects. :ref:`Grouping by test class <order-of-tests>` is
    preserved when using this option. This option can be used in conjunction
    with ``--shuffle`` to reverse the order for a particular random seed.

    ``debug_mode`` specifies what the :setting:`DEBUG` setting should be
    set to prior to running tests.

    ``parallel`` specifies the number of processes. If ``parallel`` is greater
    than ``1``, the test suite will run in ``parallel`` processes. If there are
    fewer test cases than configured processes, Django will reduce the number
    of processes accordingly. Each process gets its own database. This option
    requires the third-party ``tblib`` package to display tracebacks correctly.

    ``tags`` can be used to specify a set of :ref:`tags for filtering tests
    <topics-tagging-tests>`. May be combined with ``exclude_tags``.

    ``exclude_tags`` can be used to specify a set of
    :ref:`tags for excluding tests <topics-tagging-tests>`. May be combined
    with ``tags``.

    If ``debug_sql`` is ``True``, failing test cases will output SQL queries
    logged to the :ref:`django.db.backends logger <django-db-logger>` as well
    as the traceback.
    If ``verbosity`` is ``2``, then queries in all tests are
    output.

    ``test_name_patterns`` can be used to specify a set of patterns for
    filtering test methods and classes by their names.

    If ``pdb`` is ``True``, a debugger (``pdb`` or ``ipdb``) will be spawned at
    each test error or failure.

    If ``buffer`` is ``True``, outputs from passing tests will be discarded.

    If ``enable_faulthandler`` is ``True``, :py:mod:`faulthandler` will be
    enabled.

    If ``timing`` is ``True``, test timings, including database setup and total
    run time, will be shown.

    If ``shuffle`` is an integer, test cases will be shuffled in a random order
    prior to execution, using the integer as a random seed. If ``shuffle`` is
    ``None``, the seed will be generated randomly. In both cases, the seed will
    be logged and set to ``self.shuffle_seed`` prior to running tests. This
    option can be used to help detect tests that aren't properly isolated.
    :ref:`Grouping by test class <order-of-tests>` is preserved when using this
    option.

    ``logger`` can be used to pass a Python :py:ref:`Logger object <logger>`.
    If provided, the logger will be used to log messages instead of printing to
    the console. The logger object will respect its logging level rather than
    the ``verbosity``.

    Django may, from time to time, extend the capabilities of the test runner
    by adding new arguments. The ``**kwargs`` declaration allows for this
    expansion. If you subclass ``DiscoverRunner`` or write your own test
    runner, ensure it accepts ``**kwargs``.

    Your test runner may also define additional command-line options.
    Create or override an ``add_arguments(cls, parser)`` class method and add
    custom arguments by calling ``parser.add_argument()`` inside the method, so
    that the :djadmin:`test` command will be able to use those arguments.

    .. versionadded:: 4.0

        The ``logger`` and ``shuffle`` arguments were added.

Attributes
~~~~~~~~~~

.. attribute:: DiscoverRunner.test_suite

    The class used to build the test suite. By default it is set to
    ``unittest.TestSuite``.
    This can be overridden if you wish to implement
    different logic for collecting tests.

.. attribute:: DiscoverRunner.test_runner

    This is the class of the low-level test runner which is used to execute
    the individual tests and format the results. By default it is set to
    ``unittest.TextTestRunner``. Despite the unfortunate similarity in
    naming conventions, this is not the same type of class as
    ``DiscoverRunner``, which covers a broader set of responsibilities. You
    can override this attribute to modify the way tests are run and reported.

.. attribute:: DiscoverRunner.test_loader

    This is the class that loads tests, whether from TestCases or modules or
    otherwise and bundles them into test suites for the runner to execute.
    By default it is set to ``unittest.defaultTestLoader``. You can override
    this attribute if your tests are going to be loaded in unusual ways.

Methods
~~~~~~~

.. method:: DiscoverRunner.run_tests(test_labels, **kwargs)

    Run the test suite.

    ``test_labels`` allows you to specify which tests to run and supports
    several formats (see :meth:`DiscoverRunner.build_suite` for a list of
    supported formats).

    .. deprecated:: 4.0

        ``extra_tests`` is a list of extra ``TestCase`` instances to add to the
        suite that is executed by the test runner. These extra tests are run in
        addition to those discovered in the modules listed in ``test_labels``.

    This method should return the number of tests that failed.

.. classmethod:: DiscoverRunner.add_arguments(parser)

    Override this class method to add custom arguments accepted by the
    :djadmin:`test` management command. See
    :py:meth:`argparse.ArgumentParser.add_argument()` for details about adding
    arguments to a parser.

.. method:: DiscoverRunner.setup_test_environment(**kwargs)

    Sets up the test environment by calling
    :func:`~django.test.utils.setup_test_environment` and setting
    :setting:`DEBUG` to ``self.debug_mode`` (defaults to ``False``).

.. method:: DiscoverRunner.build_suite(test_labels=None, **kwargs)

    Constructs a test suite that matches the test labels provided.

    ``test_labels`` is a list of strings describing the tests to be run. A test
    label can take one of four forms:

    * ``path.to.test_module.TestCase.test_method`` -- Run a single test method
      in a test case.
    * ``path.to.test_module.TestCase`` -- Run all the test methods in a test
      case.
    * ``path.to.module`` -- Search for and run all tests in the named Python
      package or module.
    * ``path/to/directory`` -- Search for and run all tests below the named
      directory.

    If ``test_labels`` has a value of ``None``, the test runner will search for
    tests in all files below the current directory whose names match its
    ``pattern`` (see above).

    .. deprecated:: 4.0

        ``extra_tests`` is a list of extra ``TestCase`` instances to add to the
        suite that is executed by the test runner. These extra tests are run in
        addition to those discovered in the modules listed in ``test_labels``.

    Returns a ``TestSuite`` instance ready to be run.

.. method:: DiscoverRunner.setup_databases(**kwargs)

    Creates the test databases by calling
    :func:`~django.test.utils.setup_databases`.

.. method:: DiscoverRunner.run_checks(databases)

    Runs the :doc:`system checks </topics/checks>` on the test ``databases``.

.. method:: DiscoverRunner.run_suite(suite, **kwargs)

    Runs the test suite.

    Returns the result produced by running the test suite.

.. method:: DiscoverRunner.get_test_runner_kwargs()

    Returns the keyword arguments to instantiate the
    ``DiscoverRunner.test_runner`` with.

.. method:: DiscoverRunner.teardown_databases(old_config, **kwargs)

    Destroys the test databases, restoring pre-test conditions by calling
    :func:`~django.test.utils.teardown_databases`.

.. method:: DiscoverRunner.teardown_test_environment(**kwargs)

    Restores the pre-test environment.

.. method:: DiscoverRunner.suite_result(suite, result, **kwargs)

    Computes and returns a return code based on a test suite, and the result
    from that test suite.

.. method:: DiscoverRunner.log(msg, level=None)

    .. versionadded:: 4.0

    If a ``logger`` is set, logs the message at the given integer
    `logging level`_ (e.g. ``logging.DEBUG``, ``logging.INFO``, or
    ``logging.WARNING``). Otherwise, the message is printed to the console,
    respecting the current ``verbosity``. For example, no message will be
    printed if the ``verbosity`` is 0, ``INFO`` and above will be printed if
    the ``verbosity`` is at least 1, and ``DEBUG`` will be printed if it is at
    least 2. The ``level`` defaults to ``logging.INFO``.

    .. _`logging level`: https://docs.python.org/3/library/logging.html#levels

Testing utilities
-----------------

``django.test.utils``
~~~~~~~~~~~~~~~~~~~~~

.. module:: django.test.utils
    :synopsis: Helpers to write custom test runners.

To assist in the creation of your own test runner, Django provides a number of
utility methods in the ``django.test.utils`` module.

.. function:: setup_test_environment(debug=None)

    Performs global pre-test setup, such as installing instrumentation for the
    template rendering system and setting up the dummy email outbox.

    If ``debug`` isn't ``None``, the :setting:`DEBUG` setting is updated to its
    value.

.. function:: teardown_test_environment()

    Performs global post-test teardown, such as removing instrumentation from
    the template system and restoring normal email services.

.. function:: setup_databases(verbosity, interactive, *, time_keeper=None, keepdb=False, debug_sql=False, parallel=0, aliases=None, serialized_aliases=None, **kwargs)

    Creates the test databases.

    Returns a data structure that provides enough detail to undo the changes
    that have been made. This data will be provided to the
    :func:`teardown_databases` function at the conclusion of testing.

    The ``aliases`` argument determines which :setting:`DATABASES` aliases test
    databases should be set up for.
    If it's not provided, it defaults to all of
    :setting:`DATABASES` aliases.

    The ``serialized_aliases`` argument determines what subset of ``aliases``
    test databases should have their state serialized to allow usage of the
    :ref:`serialized_rollback <test-case-serialized-rollback>` feature. If
    it's not provided, it defaults to ``aliases``.

    .. versionchanged:: 4.0

        The ``serialized_aliases`` kwarg was added.

.. function:: teardown_databases(old_config, parallel=0, keepdb=False)

    Destroys the test databases, restoring pre-test conditions.

    ``old_config`` is a data structure defining the changes in the database
    configuration that need to be reversed. It's the return value of the
    :meth:`setup_databases` method.

``django.db.connection.creation``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. currentmodule:: django.db.connection.creation

The creation module of the database backend also provides some utilities that
can be useful during testing.

.. function:: create_test_db(verbosity=1, autoclobber=False, serialize=True, keepdb=False)

    Creates a new test database and runs ``migrate`` against it.

    ``verbosity`` has the same behavior as in ``run_tests()``.

    ``autoclobber`` describes the behavior that will occur if a
    database with the same name as the test database is discovered:

    * If ``autoclobber`` is ``False``, the user will be asked to
      approve destroying the existing database. ``sys.exit`` is
      called if the user does not approve.

    * If ``autoclobber`` is ``True``, the database will be destroyed
      without consulting the user.

    ``serialize`` determines if Django serializes the database into an
    in-memory JSON string before running tests (used to restore the database
    state between tests if you don't have transactions).
    You can set this to
    ``False`` to speed up creation time if you don't have any test classes
    with :ref:`serialized_rollback=True <test-case-serialized-rollback>`.

    If you are using the default test runner, you can control this with
    the :setting:`SERIALIZE <TEST_SERIALIZE>` entry in the :setting:`TEST
    <DATABASE-TEST>` dictionary.

    ``keepdb`` determines if the test run should use an existing
    database, or create a new one. If ``True``, the existing
    database will be used, or created if not present. If ``False``,
    a new database will be created, prompting the user to remove
    the existing one, if present.

    Returns the name of the test database that it created.

    ``create_test_db()`` has the side effect of modifying the value of
    :setting:`NAME` in :setting:`DATABASES` to match the name of the test
    database.

.. function:: destroy_test_db(old_database_name, verbosity=1, keepdb=False)

    Destroys the database whose name is the value of :setting:`NAME` in
    :setting:`DATABASES`, and sets :setting:`NAME` to the value of
    ``old_database_name``.

    The ``verbosity`` argument has the same behavior as for
    :class:`~django.test.runner.DiscoverRunner`.

    If the ``keepdb`` argument is ``True``, then the connection to the
    database will be closed, but the database will not be destroyed.

.. _topics-testing-code-coverage:

Integration with ``coverage.py``
================================

Code coverage describes how much source code has been tested. It shows which
parts of your code are being exercised by tests and which are not. It's an
important part of testing applications, so it's strongly recommended to check
the coverage of your tests.

Django can be easily integrated with `coverage.py`_, a tool for measuring code
coverage of Python programs. First, `install coverage.py`_. Next, run the
following from your project folder containing ``manage.py``::

    coverage run --source='.' manage.py test myapp

This runs your tests and collects coverage data of the executed files in your
project.
You can see a report of this data by typing the following command::

    coverage report

Note that some Django code was executed while running tests, but it is not
listed here because of the ``source`` flag passed to the previous command.

For more options like annotated HTML listings detailing missed lines, see the
`coverage.py`_ docs.

.. _coverage.py: https://coverage.readthedocs.io/
.. _install coverage.py: https://pypi.org/project/coverage/