
A script for making patches

By joachim, Fri, 03/27/2015 - 13:46

I have a standard format for patch names: 1234-99.project.brief-description.patch, where 1234 is the issue number and 99 is the (expected) comment number. However, creating one involves two copy-pastes: one for the issue number, taken from my browser, and one for the project name, taken from my command line prompt.

Some automation of this is clearly possible, especially as I usually name my git branches 1234-brief-description. More automation is less typing, and so in true XKCD condiment-passing style, I've now written that script, which you can find on github as dorgpatch. (The hardest part was thinking of a good name, and as you can see, in the end I gave up.)

Out of the components of the patch name, the issue number and description can be deduced from the current git branch, and the project from the current folder. For the comment number, a bit more work is needed: but drupal.org now has a public API, so a simple REST request to that gives us data about the issue node including the comment count.
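
For instance, a minimal sketch of that request (not the actual dorgpatch code; it assumes jq is installed and that the issue node's JSON exposes a comment_count property):

  ISSUE=1234
  # Ask drupal.org's REST API for the issue node as JSON and read the count.
  curl -s "https://www.drupal.org/api-d7/node/$ISSUE.json" | jq -r '.comment_count'
  # The new patch then uses comment_count + 1 as its expected comment number.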

So far, so good: we can generate the filename for a new patch. But really, the script should take care of doing the diff too. That's actually the trickiest part: figuring out which branch to diff against. It requires a bit of git branch wizardry to look at the branches that the current branch forks off from, and some regular expression matching to find one that looks like a Drupal development branch (i.e., 8.x-4.x, or 8.0.x). It's probably not perfect; I don't know if I accounted for a possibility such as 8.x-4.x branching off a 7.x-3.x which then has no further commits and so is also reachable from the feature branch.
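
The gist of the branch-hunting is roughly this (a sketch of the approach rather than the script's actual code; PATCHNAME stands in for the generated filename):

  # Look through local branches for one whose name matches the Drupal
  # development branch patterns, e.g. 8.x-4.x or 8.0.x...
  for branch in $(git for-each-ref --format='%(refname:short)' refs/heads/); do
    if echo "$branch" | grep -qE '^([0-9]+\.x-[0-9]+\.x|[0-9]+\.[0-9]+\.x)$'; then
      # ...and which the current feature branch forks off from.
      if git merge-base --is-ancestor "$branch" HEAD; then
        git diff "$branch" > "$PATCHNAME"
        break
      fi
    fi
  done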

The other thing this script can do is create a tests-only patch. These are useful, and generally advisable on drupal.org issues, to demonstrate that the test not only checks for the correct behaviour, but also fails against the unfixed code. The script assumes that you have two branches: the one you're on, 1234-brief-description, and also one called 1234-tests, which contains only commits that change tests.

The git workflow to get to that point would be:

  1. Create the branch 1234-brief-description
  2. Make commits to fix the bug
  3. Create a branch 1234-tests
  4. Make commits to tests (I assume most people are like me, and write the tests after the fix)
  5. Move the string of commits that are only tests so they fork off at the same point as the feature branch: git rebase --onto 8.x-4.x 1234-brief-description 1234-tests
  6. Go back to 1234-brief-description and do: git merge 1234-tests, so the feature branch includes the tests.
  7. If you need to do further work on the tests, you can repeat with a temporary branch that you rebase onto the tip of 1234-tests. (Or you can cherry-pick the commits. Or do the cherry-pick with git rev-list, which is a trick I discovered today; see the sketch just after this list.)
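
That last trick looks something like this (branch names are purely illustrative; it replays the commits that exist only on the temporary branch onto 1234-tests, oldest first):

  git checkout 1234-tests
  # List the commits on the temporary branch that aren't on 1234-tests,
  # oldest first, and cherry-pick them in that order.
  git cherry-pick $(git rev-list --reverse 1234-tests..1234-tests-temp)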

Next step will be having the script make an interdiff file, which is a task I find particularly fiddly.

A git-based patch workflow for drupal.org (with interdiffs for free!)

By joachim, Thu, 11/27/2014 - 08:39

There's been a lot of discussion about how we need github-like features on d.org. Will we get them? There are definitely many improvements in the pipeline to the way our issue queues work. Whether we actually need to replicate github is another debate (and my take on it is that I don't think we do).

In the meantime, I think that it's possible to have a good collaborative workflow with what we have right now on drupal.org, with just the issue queue and patches, and git local branches. Here's what I've gradually refined over the years. It's fast, it helps you keep track of things, and it makes the most of git's strengths.

A word on local branches

Git's killer feature, in my opinion, is local branches. Local branches allow you to keep work on different issues separate, and they allow you to experiment and backtrack. To get the most out of git, you should be making small, frequent commits.

Whenever I do a presentation on git, I ask for a show of hands of who's ever had to bounce on CMD-Z in their text editor because they broke something that was working five minutes ago. Commit often, and never have that problem again: my rule of thumb is to commit any time that your work has reached a state where if subsequent changes broke it, you'd be dismayed to lose it.

Starting work on an issue

My first step when I'm working on an issue is obviously:

  git pull

This gets the current branch (e.g. 7.x, 7.x-2.x) up to date. Then it's a good idea to reload your site and check it's all okay. If you've not worked on core or the contrib project in question in a while, then you might need to run update.php, in case new commits have added updates.
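
If you use drush, that whole step is quick (a sketch, using Drupal 7-era drush commands):

  git pull
  drush updatedb -y   # run any pending update hooks, as update.php would
  drush cc all        # clear the caches while we're at it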

Now start a new local branch for the issue:

  git checkout -b 123456-foobar-is-broken

I like to prefix my branch name with the issue number, so I can always find the issue for a branch, and find my work in progress for an issue. A description after that is nice, and as git has bash autocompletion for branch names, it doesn't get in the way. Using the issue number also means that it's easy to see later on which branches I can delete to unclutter my local git checkout: if the issue has been fixed, the branch can be deleted!

So now I can go ahead and start making commits. Because a local branch is private to me, I can feel free to commit code that's a total mess. So something like:

  dpm($some_variable_I_needed_to_examine);
  /*
  // Commented-out earlier approach that didn't quite work right.
  $foo += $bar;
  */
  // Badly-formatted code that will need to be cleaned up.
  if($badly-formatted_code) { $arg++; }

That last bit illustrates an important point: commit code before cleaning up. I've lost count of the number of times that I've got it working, and cleaned up, and then broken it because I've accidentally removed an important line that was lost among the cruft. So as soon as code is working, I make a commit, usually whose message is something like 'TOUCH NOTHING IT WORKS!'. Then, start cleaning up: remove the commented-out bits, the false starts, the stray code that doesn't do anything, in small commits of course. (This is where you find it actually does, and breaks everything: but that doesn't matter, because you can just revert to a previous commit, or even use git bisect.)
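
Getting back out of trouble then looks something like this (WORKING stands in for the hash of that 'TOUCH NOTHING IT WORKS!' commit):

  # Throw the clean-up away entirely and return to the known-good commit:
  git reset --hard WORKING
  # Or, if you're not sure which of the small clean-up commits broke it:
  git bisect start
  git bisect bad            # the current state is broken
  git bisect good WORKING   # this one was fine
  # ...test each commit git checks out, marking it good or bad, then:
  git bisect reset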

Keeping up to date

Core (or the module you're working on) doesn't stay still. By the time you're ready to make a patch, it's likely that there'll be new commits on the main development branch (with core it's almost certain). And before you're ready, there may be commits that affect your ongoing work in some way: API changes, bug fixes that you no longer need to work around, and so on.

Once you've made sure there's no work currently uncommitted (either use git stash, or just commit it!), do:

git fetch
git rebase BRANCH

where BRANCH is the main development branch that is being committed to on drupal.org, such as 8.0.x, 7.x-2.x, and so on.

(This is arguably one case where a local branch is easier to work with than a github-style forked repository.)

There's lots to read about rebasing elsewhere on the web, and some will say that rebasing is a terrible thing. It's not, when used correctly. It can cause merge conflicts, it's true. But here's another place where small, regular commits help you: small commits mean small conflicts that shouldn't be too hard to resolve.

Making a patch

At some point, I'll have code I'm happy with (and I'll have made a bunch of commits whose log messages are 'clean-up' and 'formatting'), and I want to make a patch to post to the issue:

  git diff 7.x-1.x > 123456.PROJECT.foobar-is-broken.patch

Again, I use the issue number in the name of the patch. Tastes differ on this. I like the issue number to come first. This means it's easy to use autocomplete, and all patches are grouped together in my file manager and the sidebar of my text editor.

Reviewing and improving on a patch

Now suppose Alice comes along, reviews my patch, and wants to improve it. She should make her own local branch:

  git checkout -b 123456-foobar-is-broken

and download and apply my patch:

  wget PATCHURL
  patch -p1 < 123456.PROJECT.foobar-is-broken.patch

(Though I would hope she has a bash alias for 'patch -p1' like I do. The other thing to say about the above is that while wget is working at downloading the patch, there's usually enough time to double-click the name of the patch in its progress output and copy it to the clipboard so you don't have to type it at all.)
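
Such an alias can be as simple as this (a sketch for ~/.bashrc; the name is arbitrary):

  # Apply a patch from the project root with less typing.
  ap() { patch -p1 < "$1"; }
  # usage: ap 123456.PROJECT.foobar-is-broken.patch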

And finally commit it to her branch. I would suggest she uses a commit message that describes it thus:

  git commit -m "joachim's patch at comment #1"

(Though again, I would hope she uses a GUI for git, as it makes this sort of thing much easier.)

Alice can now make further commits in her local branch, and when she's happy with her work, make a patch the same way I did. She can also make an interdiff very easily, by doing a git diff against the commit that represents my patch.
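
That is, assuming she committed my patch as a single commit, the interdiff is just this (the commit reference is illustrative):

  # Diff from the commit representing the previous patch to the current work.
  git diff COMMIT_OF_PATCH_1 > interdiff-1-2.txt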

Incorporating other people's changes to ongoing work

All simple so far. But now suppose I want to fix something else (patches can often bounce around like this, as it's great to have someone else to spot your mistakes and to take turns with). My branch is still at the state of my patch. Alice's patch is against the main branch (for the purposes of this example, 7.x-1.x).

What I want is a new commit on the tip of my local branch that says 'Alice's changes from comment #2'. What I need is for git to believe it's on my local branch, but for the project files to look like the 7.x-1.x branch. With git, there's nearly always a way:

  git checkout 7.x-1.x .

Note the dot at the end. This is the filename parameter to the checkout command, which tells git that rather than switching branches, you want to check out just the given file(s) while staying on your current branch. Because the filename is a dot, that applies to the entire project. The branch remains unchanged, but all the files from 7.x-1.x are checked out.

I can now apply Alice's patch:

  wget PATCHURL
  patch -p1 < 123456.2.PROJECT.foobar-is-broken.patch

(Alice has put the comment ID after the issue ID in the patch filename.)

When I make a commit, the new commit goes on the tip of my local branch. The commit diff won't look like Alice's patch: it'll look like the difference between my patch and Alice's patch: effectively, an interdiff. I now make a commit for Alice's patch:

  git commit -m "Alice's patch at comment #2"

I can make more changes, then do a diff as before, post a patch, and work on the issue advances to another iteration.

Here's an example of my local branch for an issue on Migrate I've been working on recently. You can see where I made a bunch of commits to clean up the documentation to get ready to make a patch. Following that is a commit for the patch the module maintainer posted in response to mine. And following that are a few further tweaks that I made on top of the maintainer's patch, which I then of course posted as another patch.

A screenshot of a git GUI showing the tip of a local branch, with a commit for a patch from another user.

(Notice how in a local branch, I don't feel the need to type terribly accurately for my commit messages, or indeed be all that clear.)

Improving on our tools

Where next? I'm pretty happy with this workflow as it stands, though I think there's plenty of scope for making it easier with some git or bash aliases. In particular, applying Alice's patch is a little tricky. (Though the stumbling block there is that you need to know the name of the main development branch. Maybe pass the script the comment URL, and let it ask d.org what the branch of that issue is?)
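
Something along these lines, perhaps (only a sketch: the exact property name and format the drupal.org API returns for the issue version is an assumption here, and jq is assumed to be installed):

  ISSUE=123456
  # Ask drupal.org which version the issue is filed against, e.g. "7.x-1.x-dev",
  # from which the development branch name can be derived.
  curl -s "https://www.drupal.org/api-d7/node/$ISSUE.json" | jq -r '.field_issue_version'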

Beyond that, I wonder if any changes can be made to the way git works on d.org. A sandbox per issue would replace the passing around of patch files: you'd still have your local branch, and merge in and push instead of posting a patch. But would we have one single branch for the issue's development, which then runs the risk of commit clashes, or start a new branch each time someone wants to share something, which adds complexity to merging? And finally, sandboxes with public branches mean that rebasing against the main project's development can't be done (or at least, not without everyone knowing how to handle the consequences). The alternative would be merging in, which isn't perfect either.

The key thing, for me, is to preserve (and improve) the way that so often on d.org, issues are not worked on by just one person. They're a ball that we take turns pushing forward (snowball, Sisyphean rock, take your pick depending on the issue!). That's our real strength as a community, and whatever changes we make to our toolset have to be made with the goal of supporting that.

Building Fast and Flexible Application UIs with Entity Operations

By joachim, Tue, 11/18/2014 - 13:38

Now that I've finished the Big Monster Project of Doom that I've been on for the last two years, I can talk more about some of the code I wrote for it. I can also say what it was: a web application for activists to canvass the public for a certain recent national referendum (I'll let you guess which one).

One of the major modules I wrote was Entity Operations module. What began as a means to avoid repeating the same code each time I needed a new entity type soon became the workhorse for the whole application UI.

The initial idea was this: if you want a custom entity type, and you want a UI for adding, editing, and deleting entities (much like with nodes), then you have to build this all yourself: hook_menu() items, various entity callbacks, form builders (and validation and submit handlers) for the entity form and the delete confirmation form. (The Model module demonstrates this well.)

That's a lot of boilerplate code, where the only differences are the entity type's name, the base path where the entity UI sits, and the entity form builder itself (though even that can be generalized, as will be seen).

Faced with this and a project on which I knew from the start I was going to need a good handful of custom entities (for use with Microsoft Dynamics CRM, accessed with another custom module of mine, Remote Entity API), I undertook to build a framework that would take away all the repetition.

An Entity UI is thus built by declaring:

  • A base path (for nodes, this would be 'node'; we'll ignore the fact that in core, this path itself is a listing of content).
  • A list of subpaths to form the tabs, and the operation handler class for each one

With this in hand, why stop at just the entity view and edit tabs? The operation handlers can output static content or forms: they can output anything. One of the most powerful enhancements I made to this early on was to write an operations handler that outputs a view. It's the same idea as the EVA module.

So for the referendum canvassing application, I had a custom Campaign entity, that functioned as an Organic Group, and had as UI tabs several different views of members, views of contacts in the Campaign's geographic area, views of Campaign group content (such as tasks and contact lists), and so on.

This approach proved very flexible and quick. The group content entities were themselves also built with this, so that, for example, Contact List entities had operations for a user to book the entity, input data, and release it when done working on it. These were built with custom operation handlers specific to the Contact List entity, subclassing the generic form operation handler.

An unexpected bonus to all this was how easy it was to expose form-based operations to Views Bulk Operations and Services (as 'targeted actions' on the entity). This allowed the booking and release operations to work in bulk on views, and also to be done via a mobile app over Services.

A final piece of icing on the cake was the addition of alternative operation handlers for entity forms that provide just a generic bare bones form that invokes Field API to attach field widgets. With these, the amount of code needed to build a custom entity type is reduced to just three functions:

  • hook_entity_info(), to declare the entity type to Drupal core
  • hook_entity_operations_info(), to declare the operations that make up the UI
  • callback_entity_access(), which controls the access to the operations

The module has a few further tricks up its sleeve. If you're using user permissions for your entities, there's a helper function to use in your hook_permission(), which creates permissions out of all the operations (so: 'edit foobar entities', 'book foobar entities', 'discombobulate foobar entities' and so on). The entity URI callback that Drupal core requires you to have can be taken care of by a helper callback which uses the entity's base path definition. There's a form builder that lets you easily embed form-based operations into the entity build, so that you can put the sort of operations that are single buttons ('publish', 'book', etc) on the entity rather than in a tab. And finally, the links to operation tabs can be added to a view as fields, allowing a list of entities with links to view, edit, book, discombobulate, and so on.

So what started as a way to simplify and remove repetitive code became a system for building a whole entity-based UI, which ended up powering the whole of the application.

Graphing relationships between entity types

By joachim, Sun, 08/31/2014 - 21:00

Another thing that was developed as a result of my big Commerce project (see my previous blog post for the run-down of the various modules this contributed back to Drupal) was a bit of code for generating a graph that represents the relationships between entity types.

For a site with a lot of entityreference fields it's a good idea to draw diagrams before you get started, to figure out how everything ties together. But it's also nice to have a diagram that's based on what you've built, so you can compare it, and refer back to it (not to mention that it's a lot easier to read than my handwriting).

The code for this never got released; I tried various graph engines that work with Graph API, but none of them produced quite what I was hoping for. It just sat in my local copy of Field Tools for the last couple of years (I didn't even make a git branch for it, that shows how rough it was!). Then yesterday I came across the Sigma.js graph library, and that inspired me to dig out the code and finish it off.

To give the complete picture, I've added support for the relationships that are formed between entity types by their schema fields: things like the uid property on a node. These are easily picked out of hook_schema()'s foreign keys array.

In the end, I found Sigma.js wasn't the right fit: it looks very pretty, but it expects you to dictate the position of the nodes on the canvas, which doesn't really work for a generated graph. There is a plugin for it that allows the graph to be force-directed, but that was starting to be too fiddly. Instead, I found Springy, which, while maybe not quite as versatile, lays out the graph nodes automatically out of the box. It didn't take too long to write a library module for using Springy with Graph API.

Here's the result:

Graph showing relationships between entity types on a development Drupal site

Because this uses Graph API, it'll work with any graph engine, not just Springy. So I'll be interested to see what people who are more familiar with graphing can make it do. To get something that looks like the above for your site, it's simple: install the 7.x-1.x-dev release of Field Tools, install Graph API, install the Springy module, and follow the instructions in the README of that last module for installing the Springy Javascript library.
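
With drush, that's something like the following (module machine names assumed to be field_tools, graphapi, and springy):

  drush dl field_tools-7.x-1.x-dev graphapi springy
  drush en -y field_tools graphapi springy
  # The Springy JavaScript library itself still needs downloading, as per the
  # Springy module's README.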

The next stage of development for this tool is figuring out a nice way of showing entity bundles. After all, entityreference fields are on specific bundles, and may point to only certain bundles. However, they sometimes point to all bundles of an entity type. And meanwhile, schema properties are always on all bundles and point to all bundles. How do we represent that without the graph turning into a total mess? I'm pondering adding a form that lets you pick which entity types should be shown as separate bundles, but it's starting to get complicated. If you have any thoughts on this, or other ways to improve this feature, please share them with me in the Field Tools issue queue!

Getting Module Builder ready for Drupal 8

By joachim, Tue, 08/19/2014 - 08:12

I've just made a commit to Module Builder that adds unit tests. This is a big deal, because having these frees me up to start making the big changes that are needed for supporting Drupal 8's new structures: routes, plugins, forms, and so on.

The biggest challenge is going to be the interface. Currently, you give Module Builder just a module name and a list of hook names, and it does the necessary. On the command line it's nice and simple:

drush mb mymodule install schema node_insert form_alter views_data_alter

The first parameter is the module name, and everything that follows is a hook name. Now we add to the mix requests such as a form called MyModuleCakeToppingForm, or an entity type plugin, or a route bake_my_cake and its page controller. How to elegantly specify all that over the command line, without making it horribly unwieldy and impossible to remember how to use?

It's also going to be an interesting exercise in reading my own documentation and seeing how much sense it makes after something like 7 months away from the code.

From what I recall, Module Builder uses a hierarchy of component generators to build your module. Taking our example above, the first thing that happens is that the Module generator class kicks in. 'So, you want a module, do you?' it asks, 'You'll need some of these.' And it begins to assemble a list of further generators, for the components it needs: an info file, and the hooks generator. The hooks generator does the actual job of examining your list of requested hooks, and decides based on that that you need three code files: a .module, a .install, and a .views.inc. So by now we have a tree of generators like this:

- Module
-- Info file
-- Hooks
--- Code file: .module
--- Code file: .install
--- Code file: .views.inc

This is not a class hierarchy; this is a tree of objects where each generator has a list of the generators beneath it, and is responsible for collecting data from them. Once we have the tree, we iteratively have each generator assemble the data it wants to contribute, starting with the Module generator at the top.

The original plan when I wrote this system was to make the smallest granularity be a file. The leaves of the generator tree would assemble the text for their file's contents, and the Module generator would collect the files up and return them to the caller for output (either in the UI, or to write them directly).

However, while the original intention of this system was that it could be generalised to base components other than modules (so profiles and themes, which are both supported to some extent but lack the UI, see above!), it's also proven to be extendable downwards to smaller components, and to be worthwhile to do so.

Enter the Form generator. Once we have a generic Function generator (and its child class, the HookImplementation), we can create a Form generator. Given a form machine name, 'foo_form', it simply knows to add three copies of the Function generator: 'foo_form', 'foo_form_validate', and 'foo_form_submit', along with the correct parameters and some boilerplate code.

And we can specialize this further: the AdminSettingsForm simply extends the Form generator, and adds a menu item component, which itself ensures hook_menu() is requested.

At this point it starts to get a bit complicated, as we have components that request other components that are in totally different parts of the component tree. That's the point at which I think I was when I realized I needed tests so that I can refactor and clean up the messy bits of this, and enhance and extend it, without breaking what's already there.

So that's the current state of Module Builder: not yet ready for Drupal 8, but has lots of potential. At this point, I'd really welcome input on the Drush interface, as that's the big quandary. And any input on new Drupal 8 component generators would be great too; there are a few open issues in the queue. And finally, Module Builder is a complex beast; should anyone looking at the code find it baffling and impenetrable, do please file a documentation issue to highlight the problem and request clarification.