Base fields versus config fields, and how to handle the latter in tests

All fields are equal in Drupal 8! Back in Drupal 7 we had two quite different systems for data on entities: entity properties, which were defined as database fields and controlled by hardcoded form elements, and user-created fields on entities, which had widgets and formatters. But in Drupal 8, every value on an entity is a field, configured the same way in the UI, with access to the same widgets and formatters, and with its data accessed in the same way.

Is this the end of the story? No, not quite. For site builders, everything is unified, and that's great. For code that consumes data from entities, everything is unified, and that's great too. But for the developer actually working with those fields, whether a field is a base field or a config field still makes a big difference.

Config fields are easier...

Config fields are manipulated via a UI, and that makes them simple to work with.

Setup

Config fields win easily here: a few clicks in the UI to create the field, a few dropdown choices for the widget and formatter options, and you're set. Export to config, commit: done!

Base fields are more fiddly here. You need to be familiar with the Entity system, and understand (or copy-pasta!) the base field definitions. Then, knowing the right widget and formatter to use and how to set their options requires close reading of the relevant plugin classes.
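
To give a flavour (the field name and settings here are purely hypothetical), a base field definition in an entity class's baseFieldDefinitions() method looks something like this:

use Drupal\Core\Field\BaseFieldDefinition;

// A hypothetical single-value text base field, defined in the entity
// class's baseFieldDefinitions() method.
$fields['myfield'] = BaseFieldDefinition::create('string')
  ->setLabel(t('My field'))
  ->setRequired(TRUE)
  // The widget plugin to use on entity forms, and its settings.
  ->setDisplayOptions('form', [
    'type' => 'string_textfield',
    'weight' => 0,
  ])
  // The formatter plugin to use when viewing the entity, and its settings.
  ->setDisplayOptions('view', [
    'type' => 'string',
    'label' => 'above',
    'weight' => 0,
  ])
  ->setDisplayConfigurable('form', TRUE)
  ->setDisplayConfigurable('view', TRUE);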

Update

Config fields win again: change in the UI, export to config, deploy!

For base fields, you once again need to know what you're doing with the code, and you now also need a hook_update_N() to make the change to the database (now that entity schema updates are no longer applied automatically).
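
As a rough sketch (module, entity type, and field names hypothetical), that update hook looks something like this:

use Drupal\Core\Field\BaseFieldDefinition;

/**
 * Installs the hypothetical 'myfield' base field on node entities.
 */
function mymodule_update_8001() {
  $definition = BaseFieldDefinition::create('string')
    ->setLabel(t('My field'));

  \Drupal::entityDefinitionUpdateManager()
    ->installFieldStorageDefinition('myfield', 'node', 'mymodule', $definition);
}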

Removal

This is maybe a minor point, but while removing a config field is again just a UI click and a config export, removing a base field will again require a hook_update_N() to tell the Entity API to update the database tables.
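
Again as a rough sketch, with the same hypothetical names as above, the removal side looks something like this:

/**
 * Uninstalls the hypothetical 'myfield' base field from node entities.
 */
function mymodule_update_8002() {
  $update_manager = \Drupal::entityDefinitionUpdateManager();
  $definition = $update_manager->getFieldStorageDefinition('myfield', 'node');
  $update_manager->uninstallFieldStorageDefinition($definition);
}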

...but base fields are more robust

Given the above, why bother with base fields? Well, if your fields just sit there holding data, you can stop reading. But if your fields are involved in your custom code, as a developer, this should give you an unpleasant feeling: your code in modules/custom/mymodule is dependent on configuration that lives in the site's config export.

Robustness

The fact that config fields are so easy to change in the UI is now a mark against them. Another developer could change a field without a full understanding of the code that works with it. The code might expect a value to always be there, but the field is now non-required. The code might expect certain options, but the field now has an extra one.

Instead of this brittle dependency, it feels much safer to have base fields defined closer to the code that makes use of them: in the same module.

Tests

The solution to the dependency problem is obviously tests, which would pick up any change that breaks the code's expectations of how the fields behave. But now we hit a big problem with config fields: your test site doesn't have those fields!

My first attempt at solving this problem was to use the Configuration development module. This allows you to export config from the site to a module's /config/install folder. A test that installs that module then gets those config items imported.

It's a quick and simple approach: when a test crashes because of a missing field, find it in the config folder, add it to the right module's .info.yml file, run drush cde MODULE, and commit the changes and the newly created files.

This approach also works for all other sorts of config your test might need: taxonomy vocabularies, node types, roles, and more!

But now you have an additional maintenance burden, because your site config lives in three places: the site itself, the config export, and the module config. And if developers who change config forget to also export it to module config, your test no longer tests how the site actually works, and so will either fail or, worse, no longer cover what you expect.

Best of both worlds: import from config

To summarize: we want the convenience of config fields, but we want them to be close to our code and testable. If they're testable, we can maybe stand to forego the closeness.

My best solution to this so far is simple: allow tests to import configuration direct from the site config sync folder.

Here's the helper method I've been using:

/**
 * Imports config from a real local site's config folder.
 *
 * TODO: this currently only supports config entities.
 *
 * @param string $site_key
 *   The site key, that is, the subfolder in /config in which to find the
 *   config files.
 * @param string $config_name
 *   The config name. This is the same as the YML filename without the
 *   extension. Note that this is not checked for dependencies.
 *
 * @throws \Exception
 *   Throws an exception if the config can't be imported.
 */
protected function importSiteConfig(string $site_key, string $config_name) {
  $storage = $this->container->get('config.storage.sync');

  // Code cribbed from Config Devel module's ConfigImporterExporter.
  $config_filename = "../config/{$site_key}/{$config_name}.yml";

  if (!file_exists($config_filename)) {
    throw new \Exception("Unable to find config file $config_filename to import from.");
  }

  $contents = @file_get_contents($config_filename);

  $data = $storage->decode($contents);
  if (!$data) {
    throw new \Exception("Failed to import config $config_name from site $site_key.");
  }

  // This assumes we only ever import entities from site config for tests,
  // which so far is the case.
  $entity_type_id = $this->container->get('config.manager')->getEntityTypeIdByName($config_name);

  if (empty($entity_type_id)) {
    throw new \Exception("Non-entity config import not yet supported!");
  }

  $entity = $this->container->get('entity_type.manager')->getStorage($entity_type_id)->create($data);
  $entity->save();
}

Use it in your test's setUp() or test methods like this:

  // Import configuration from the default site config files.
  $this->importSiteConfig('default', 'field.storage.node.field_my_field');
  $this->importSiteConfig('default', 'field.field.node.article.field_my_field');

As you can see from the documentation, this so far only handles config entities. I've not yet had a use for importing config that's not an entity (and just about all config items are entities, except for the ones that are a collection of a module's settings). And it doesn't check for dependencies: you'll need to import them in the right order (field storage before field), and ensure the modules that provide the various things are enabled in the test.
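
Putting that together, a kernel test using this helper might look roughly like the following (module and field names hypothetical, and the importSiteConfig() method is assumed to live on the test class or a shared trait):

use Drupal\KernelTests\KernelTestBase;

class MyFieldTest extends KernelTestBase {

  /**
   * The modules that provide the entity types, field types and plugins
   * used by the imported config.
   */
  protected static $modules = ['system', 'user', 'node', 'field', 'text', 'mymodule'];

  protected function setUp() {
    parent::setUp();

    $this->installEntitySchema('user');
    $this->installEntitySchema('node');

    // Import in dependency order: the field storage before the field.
    // The 'article' node type would also need to exist, whether imported
    // or created directly.
    $this->importSiteConfig('default', 'field.storage.node.field_my_field');
    $this->importSiteConfig('default', 'field.field.node.article.field_my_field');
  }

}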

I should mention for the sake of completeness that there's another sort of field, sort of: bundle fields. These are in code like base fields, but limited to a particular bundle. They also have a variety of problems, as the system that supports them is somewhat incomplete.

Finally, it occurs to me that another way to bridge the gap would be to allow editing base fields in the UI, and then export that back to code consisting of the BaseFieldDefinition calls. But hang on... haven't I just reinvented Drupal 7-era Features?

UPDATE

It turned out this was crashing when importing fields with value options. This code (and that in Config Devel module, which is where I cribbed it from) wasn't properly using the Config API. Here's updated code which, incidentally, can now also handle non-entity config:

  /**
   * Imports config from a real local site's config folder.
   *
   * Needs config module to be enabled.
   *
   * @param string $site_key
   *   The site key, that is, the subfolder in /config in which to find the
   *   config files.
   * @param string $config_name
   *   The config name. This is the same as the YML filename without the
   *   extension.
   *
   * @throws \Exception
   *   Throws an exception if the config can't be imported.
   */
  protected function importSiteConfig(string $site_key, string $config_name) {
    $storage = $this->container->get('config.storage.sync');

    // Code cribbed from Config Devel module's ConfigImporterExporter.
    $config_filename = "../config/{$site_key}/{$config_name}.yml";

    if (!file_exists($config_filename)) {
      throw new \Exception("Unable to find config file $config_filename to import from.");
    }

    $contents = @file_get_contents($config_filename);

    $data = $storage->decode($contents);
    if (!$data) {
      throw new \Exception("Failed to import config $config_name from site $site_key.");
    }

    unset($data['uuid']);

    // We have to partially mock the source storage, because otherwise
    // SystemConfigSubscriber::onConfigImporterValidateSiteUUID() will complain
    // because the source config doesn't contain a system.site config. (For
    // general information, the single-import UI in core doesn't hit this
    // problem because it TOTALLY cheats and pretends its source is the full
    // site config! Though who knows how it manages not to trip up when the site
    // config folder is empty!)
    $source_storage = $this->getMockBuilder(StorageReplaceDataWrapper::class)
      ->setConstructorArgs([$this->container->get('config.storage')])
      ->setMethods(['exists'])
      ->getMock();
    // Satisfy SystemConfigSubscriber that we have a system.site config.
    $source_storage->expects($this->any())
      ->method('exists')
      ->willReturn(TRUE);

    $source_storage->replaceData($config_name, $data);

    // Similarly mock the storage comparer.
    $storage_comparer = $this->getMockBuilder(StorageComparer::class)
      ->setConstructorArgs([$source_storage, $this->container->get('config.storage')])
      ->setMethods(['validateSiteUuid'])
      ->getMock();
    // Satisfy SystemConfigSubscriber that system.site config is valid.
    $storage_comparer->expects($this->any())
      ->method('validateSiteUuid')
      ->willReturn(TRUE);

    $storage_comparer->createChangelist();

    $config_importer = new ConfigImporter(
      $storage_comparer->createChangelist(),
      $this->container->get('event_dispatcher'),
      $this->container->get('config.manager'),
      $this->container->get('lock'),
      $this->container->get('config.typed'),
      $this->container->get('module_handler'),
      $this->container->get('module_installer'),
      $this->container->get('theme_handler'),
      $this->container->get('string_translation'),
      $this->container->get('extension.list.module')
    );

    $config_importer->import();
  }


UPDATE 2

I've found an occasional problem with site config items that have third-party settings from modules that aren't relevant to the test. Examples include Menu UI settings in node types, and contrib translation settings in fields. Rather than enable extra modules in the test, these settings can be hacked out of the data after we decode it from the YAML. Here's an updated version of the code.


use Drupal\Core\Config\ConfigImporter;
use Drupal\Core\Config\StorageComparer;
use Drupal\config\StorageReplaceDataWrapper;

/**
 * Imports config from a real local site's config folder.
 *
 * @param string $site_key
 *   The site key, that is, the subfolder in /config in which to find the
 *   config files.
 * @param string $config_name
 *   The config name. This is the same as the YML filename without the
 *   extension.
 * @param array $third_party_filter
 *   (optional) Keys in the config item's third party settings that are to be
 *   removed prior to import. This prevents unwanted dependencies on modules
 *   that are not relevant to a test. Defaults to an empty array; no keys
 *   removed.
 *
 * @throws \Exception
 *   Throws an exception if the config can't be imported.
 */
protected function importSiteConfig(string $site_key, string $config_name, array $third_party_filter = []) {
  $storage = $this->container->get('config.storage.sync');

  // Code cribbed from Config Devel module's ConfigImporterExporter.
  $config_filename = "../config/{$site_key}/{$config_name}.yml";

  if (!file_exists($config_filename)) {
    throw new \Exception("Unable to find config file $config_filename to import from.");
  }

  $contents = @file_get_contents($config_filename);

  $data = $storage->decode($contents);
  if (!$data) {
    throw new \Exception("Failed to import config $config_name from site $site_key.");
  }

  unset($data['uuid']);

  // Remove third-party settings.
  foreach ($third_party_filter as $settings_key) {
    unset($data['third_party_settings'][$settings_key]);
  }

  // We have to partially mock the source storage, because otherwise
  // SystemConfigSubscriber::onConfigImporterValidateSiteUUID() will complain
  // because the source config doesn't contain a system.site config. (For
  // general information, the single-import UI in core doesn't hit this
  // problem because it TOTALLY cheats and pretends its source is the full
  // site config! Though who knows how it manages not to trip up when the site
  // config folder is empty!)
  $source_storage = $this->getMockBuilder(StorageReplaceDataWrapper::class)
    ->setConstructorArgs([$this->container->get('config.storage')])
    ->setMethods(['exists'])
    ->getMock();
  // Satisfy SystemConfigSubscriber that we have a system.site config.
  $source_storage->expects($this->any())
    ->method('exists')
    ->willReturn(TRUE);

  $source_storage->replaceData($config_name, $data);

  // Similarly mock the storage comparer.
  $storage_comparer = $this->getMockBuilder(StorageComparer::class)
    ->setConstructorArgs([$source_storage, $this->container->get('config.storage')])
    ->setMethods(['validateSiteUuid'])
    ->getMock();
  // Satisfy SystemConfigSubscriber that system.site config is valid.
  $storage_comparer->expects($this->any())
    ->method('validateSiteUuid')
    ->willReturn(TRUE);

  $storage_comparer->createChangelist();

  $config_importer = new ConfigImporter(
    $storage_comparer->createChangelist(),
    $this->container->get('event_dispatcher'),
    $this->container->get('config.manager'),
    $this->container->get('lock'),
    $this->container->get('config.typed'),
    $this->container->get('module_handler'),
    $this->container->get('module_installer'),
    $this->container->get('theme_handler'),
    $this->container->get('string_translation'),
    $this->container->get('extension.list.module')
  );

  $config_importer->import();
}

Debugging and logging AJAX requests in tests on Docksal

The hardest thing I find with tests is understanding errors. Every time I think I've got debugging output sorted, I find a new layer where it doesn't work, I've got nothing, and I'm in the dark with something that's crashing and I don't know why.

The first layer is simple: errors in your test code itself. For example, make a typo in your tests/src/Functional/MyTest.php and PHPUnit crashes and you see the error in the terminal.

But when it's site code that's crashing, you're dealing with a system that is being driven by code, and therefore, you can't see it. And that's a major obstacle to figuring out a problem.

The HTML output that Drupal's Functional and Functional Javascript tests produce is a huge help: every time your test code makes a request to the test site, an HTML file is written to the test files directory. If your site crashes when your test makes a request, you'll see the error and the backtrace there.

However, there's no such output when a Functional Javascript test causes an AJAX request. And while you can create a screenshot of what the page looks like after the request, or another HTML file of the page (see https://www.drupal.org/project/drupal/issues/3090498 for instructions on how; the issue is about making that automatic, but I have no idea how that might be possible), you can't see the actual error, because an AJAX request that fails just sits there doing nothing. There's nothing useful to see in the browser.

So we need to see logs. When a real site has an AJAX crash, with a human-controlled web browser making the request, you can go and look in the logs. With a test site, the log table is zapped when the test completes.

Fortunately, Drupal 8's pluggable logging means there are other ways of getting hold of them, more permanent ways.

I first tried the log_stdout module. This outputs log entries to STDOUT. If you're running on Docksal, as I am, you have an extra layer to get through to see that. You can monitor the cli container with fin logs -f cli, and with that module, add a | ag WATCHDOG to filter.

However, I wasn't seeing backtraces in this output, and I gave up figuring out why.

So I tried the filelog module instead, which, as the name implies, writes the log to a simple text file. This needs a little more work, as by default it writes to 'public://logs'. That means each run of the test gets its own log file, which is perhaps what you want, but for my own purposes I wanted a single log file I could tail -f in a terminal window for continuous monitoring.

A quick bit of config setting in the test's setUp() does the trick:

$this->config('filelog.settings')
  ->set('location', '/var/www/docroot/sites/simpletest/logs')
  ->save();

And I think that's me at last sorted.

Multi-site search using Feeds and SearchAPI

[This is an old post that I wrote for System Seed's blog and meant to put on my own too but it fell off my radar until now. It's also about Drupal 7, but the general principle still applies.]

Handling clients with more than one site involves lots of decisions. And yet, it can sometimes seem like ultimately all that doesn't amount to a hill of beans to the end-user, the site visitor. They won't care whether you use Domain module, multi-site, separate sites with a common codebase, and so on, because most people don't notice what's in their URL bar. They want ease of login and ease of navigation. That translates into things such as the single sign-on that drupal.org uses, common menus and headers, and also site search: they don’t care that it’s actually sites search, plural; they just want to find stuff.

For the University of North Carolina, who have a network of sites running on a range of different platforms, a unified search system was a key way of giving visitors the experience of a cohesive whole. The hub site, an existing Drupal 7 installation, needed to provide search results from across the whole family of sites.

This presented a few challenges. Naturally, I turned to Apache Solr. Hitherto, I've always considered Solr to be some sort of black magic, from the way in which it requires its own separate server (http not good enough for you?) to the mysteries of its configuration (both Drupal modules that integrate with it require you to dump a bunch of configuration files into your Solr installation). But Solr excels at what it sets out to do, and the Drupal modules around it are now mature enough that things just work out of the box. Even better, Search API module allows you to plug in a different search back-end, so you can develop locally using Drupal's own database as your search provider, with the intention of plugging it all into Solr when you deploy to servers.

One possible setup would have been to have the various sites each send their data into Solr directly. However, with the Pantheon platform this didn't look to be possible: in order to achieve close integration between Drupal and Solr, Pantheon locks down your Solr instance.

That left talking to Solr via Drupal.

Search API lets you define different datasources for your search data, and comes with one for each entity type on your site. In a datasource handler class, you can define how the datasource gets a list of IDs of things to index, and how it gets the content. So writing a custom datasource was one possibility.

Enter the next problem: the external sites that needed to be indexed only exposed their content to us in one format: RSS. In theory, you could have a Search API datasource which pulls in data from an RSS feed. But then you need to write a SearchAPI datasource class which knows how to parse RSS and extract the fields from it.

That sounded like reinventing Feeds, so I turned to that to see what I could do with it. Feeds normally saves data into Drupal entities, but maybe (I thought) there was a way to have the data be passed into SearchAPI for indexing, by writing a custom Feeds plugin?

However, this revealed a funny problem of the sort that you don’t consider the existence of until you stumble on it: Feeds works on cron runs, pulling in data from a remote source and saving it into Drupal somehow. But SearchAPI also works on cron runs, pulling data in, usually entities. How do you get two processes to communicate when they both want to be the active participant?

With time pressing, I took the simple option: define a custom entity type for Feeds to put its data into, and SearchAPI to read its data from. (I could have just used a node type, but then there would have been an ongoing burden of needing to ensure that type was excluded from any kind of interaction with nodes.)

Essentially, this custom entity type acted like a bucket: Feeds dumps data in, SearchAPI picks data out. As solutions go, not the most massively elegant, at first glance. But if you think about it, if I had gone down the route of SearchAPI fetching from RSS directly, then re-indexing would have been a really lengthy process, and could have had consequences for the performance of the sites whose content was being slurped up. A sensible approach would then have been to implement some sort of caching on our server, either of the RSS feeds as files, or the processed RSS data. And suddenly our custom entity bucket system doesn’t look so inelegant after all: it’s basically a cache that both Feeds and SearchAPI can talk to easily.

There were a few pitfalls. With Search API, our search index needed to work on two entity types (nodes and the custom bucket entities), and while Search API on Drupal 7 allows this, its multiple entity type datasource handler had a few issues to iron out or learn to live with. The good news, though, is that the Drupal 8 version of Search API has the concept of multi-entity type search indexes at its core, rather than as a side feature: every index can handle multiple entity types, and there’s no such thing as a datasource for a single entity type.

With Feeds, I found that not all the configuration is exportable to Features for easy deployment. Everything about parsing the RSS feed into entities can be exported, except the actual URL, which is a separate piece of setup and not exportable. So I had to add a hook_update_N() to take care of setting that up.
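
As a rough Drupal 7-era sketch (importer name and URL hypothetical, and assuming a standalone importer rather than one attached to a node), that update hook looked something like this:

/**
 * Sets the source URL for the hypothetical 'remote_site' Feeds importer.
 */
function mymodule_update_7001() {
  // Load the importer's source and set the fetcher's URL.
  $source = feeds_source('remote_site');
  $source->addConfig(array(
    'FeedsHTTPFetcher' => array(
      'source' => 'https://example.com/feed.xml',
    ),
  ));
  $source->save();
}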

The end result though was a site search that seamlessly returns results from multiple sites, allowing users to work with a network of disparate sites built on different technologies as if they were all the same thing. Which is what they were probably thinking they were all along anyway.

Controlling multiple sites with Drush 9

Drush 9 has removed dynamic site aliases. Site aliases are hardcoded in YAML files rather than declared in PHP. Sadly, that means that many tricks you could do with the declaration of the site aliases are no longer available.

The only grouping possible is based on the YAML filename. So for example, with the Acquia Cloud Site Factory site aliases generated by the 'blt recipes:aliases:init:acquia' command, you can run a command on the same site across different environments.

But what you can't do is run a command on all the sites in one environment.

One use case for this is checking whether a module is enabled on any sites, so you know that it's safe to remove it from the codebase.

Currently, this is quite a laborious process, as 'drush pm-list' needs to be run for each site.

With environment aliases, this would be a one liner:

drush @hypothetical-env-alias pm-list | ag some_module

('ag' is the very useful silver searcher unix command, which is almost the same as the also excellent 'ack' but faster, and both are much better than grep.)

While site aliases are now static, they can be altered with Drush hooks, so I considered whether these might allow something like dynamically declared aliases, or a command option. There's an example of altering aliases with a hook in the Drush code.

In the meantime, a much simpler solution is to use xargs, which I have recently found is extremely useful in all sorts of situations. Because this allows you to run one command multiple times with a set of parameters, all you need to do is pass it a list of site aliases. Fortunately, the 'drush sa' command has lots of formatting options, and one of them gives us just what we need, a list of aliases with one on each line:

drush sa --format=list 

That gives us all the aliases, and we probably don't want that. So here's where ag first comes into play, as we can filter the list, for example, to only run on live sites (I'm using my ACSF aliases here as an example):

drush sa --format=list | ag 01live

Now we have a filtered list of aliases, and we can feed that into xargs:

drush sa --format=list | ag 01live | xargs -I % drush % pm-list

Normally, xargs puts the input parameter at the end of its command, but here we want it inserted just after the 'drush' command. The -I parameter allows us to specify a placeholder where the input parameter goes, so:

xargs -I % drush % pm-list

says that we want the site alias to go where the '%' is, which means that xargs will run:

drush SITE-ALIAS pm-list

with each value it receives, in this case, each site alias.

Another thing we will do with xargs is set the -t parameter, which outputs each actual command it executes on STDERR. That acts as a heading in the output, so we can clearly see which site is outputting what.

Finally, we can use ag a second time to filter the module list down to just the module we want to find out about:

drush sa --format=list | ag live | xargs -t -I % drush % pml | ag some_module 

The nice thing about the -t parameter is that as it's STDERR, it's not affected by the final pipe to ag for filtering output. So the output will consist of the drush command for the site, followed by the filtered output.

And hey presto.

In conclusion: dynamic site aliases in Drush were nice, but the maintainers removed them (as far as I can gather) because they were a mess to implement, and removing them vastly simplified things. Doing the equivalent with xargs took a bit of figuring out, but once you know how to do it, it's actually a much more powerful way to work with multiple sites at once.

Unnatural file changes with git

When you check out a branch or commit with git, two things happen: git changes the files in the repository folder, and changes the file that tells it what is currently checked out. In git terms, it changes what HEAD points to. In practical terms, it updates the .git/HEAD text file to contain a different reference.

But these two operations can be done separately from one another, so that while the files correspond to one commit, git thinks that the HEAD commit is another one.

This of course puts your git repository in an unstable, even unnatural state. But there are useful reasons for doing this, and one of these comes up in the operation of dorgflow, my tool for working with git and drupal.org patches.

Dorgflow creates a local branch for a drupal.org issue, and creates a commit for each patch, so you end up with a nice sequential representation of the work that's been done so far.

But every patch that's uploaded to an issue is a patch against the master branch, so we can't just apply the patches one by one and make commits: the first patch will apply, and all the subsequent ones will fail.

What we need is for the files to be in the same state as the master branch, so that the patch applies, but when we make a commit, it needs to be a new commit on the feature branch:

 * [Feature branch] Patch 1 <- Make patch 2 commit on top of this commit.
/
* [master branch] Latest master commit. <- Apply patch 2 to these files.

In git terms, we want the current files (the working tree, as git calls it) to be on the master branch, while git's HEAD is on the feature branch.

For my first attempt at getting this to work, I did the following:

  1. Put git on the feature branch.
  2. Check out the master branch's files, with git checkout master -- .

When you use the git checkout command with a file or files specified, it doesn't move HEAD, but instead changes just those files to match the given commit or branch. If you give '.' as the file to check out, it does that for all of the repository's files. In effect, you get the files to look like the given commit, in our case the master branch.

This is what we want, but it has one crucial flaw. Suppose patch 1 added a new file, foo.php. When we check out master's files, foo.php is not changed, because it's not on master. There's nothing on master to overwrite it. The git checkout master -- . command doesn't actually say 'make all the files look like master', it says 'check out all the files from master'. foo.php isn't on master, and so it's simply left alone.

Suppose now that patch 2 also adds the foo.php file, which it most likely will, since in most cases, a newer patch incorporates all the work of the previous one.

Applying patch 2 to the files in their current state will fail, because patch 2 tries to create the file foo.php, but it's already there.

So this approach is no good. I was stuck with this problem with dorgflow for months, until I had a brainwave: instead of staying on the feature branch and changing the files to look like master, why not check out the master branch, but tell git that the HEAD is at the feature branch?

That solves the problem of the foo.php file: when you check out master, the foo.php that's at the patch 1 commit vanishes, because it doesn't exist on master. So applying patch 2 will be fine.

The only remaining question was how to tell git it's on a different branch without doing a checkout of files. It turns out this is a simple plumbing command: git symbolic-ref HEAD refs/heads/BRANCH. This just tells git to change the reference that HEAD points to, which is how git stores the fact that a particular branch is the current one.
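
So, roughly, the sequence for applying each subsequent patch looks like this (branch and patch file names hypothetical):

# Put the working files in the state of the master branch.
git checkout master
# Tell git that HEAD is actually the feature branch, without touching the files.
git symbolic-ref HEAD refs/heads/12345-some-issue
# Apply the patch and commit it onto the feature branch.
git apply 12345-2.patch
git add .
git commit -m "Patch 2 from the issue."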

This was a simple change to make in dorgflow (it was actually more work to update the tests so the mocks had the correct method call expectations!).

This means that in the latest release, dorgflow now correctly handles applying a sequence of issue patches that add files.
