Blog

Git tricks: being on the wrong branch

By joachim, Fri, 07/06/2012 - 08:30

I often find that I'm in the middle of one thing when I have to do another: a hotfix for a client, a minor bug that blocks my current work, or components that need adding to a feature before I can write custom functionality.

The best way is to stash your current work, check out the master branch, make your commits there, then go back. If you're working on a feature branch (and you should be), rebase it afterwards so you have access to the new work there. So that's:

$ git stash
$ git checkout master
# make your commits
$ git checkout feature
$ git rebase master
$ git stash pop

But that's not always feasible. Sometimes I'm sloppy, and I've already made code changes before stashing. And lately, I've got one instance of the Party module where a feature branch has made database changes, but I don't want that to hold up ongoing commits (and I'm too lazy to set up a new local site!).

If your fix is just one commit, you can make it on the feature branch, then cherry-pick it to the master branch like this:

# make your commit and note its SHA
$ git stash
$ git checkout master
$ git cherry-pick COMMIT
$ git checkout feature
$ git rebase master
$ git stash pop

The rebase should be smart enough to figure out the same commit exists on both branches, and will silently drop it from the feature branch.

Alternatively, if you want to do a chunk of main branch work, make a temporary branch on the tip of feature, which you can then move to the master branch when you're done:

$ git checkout -b moveme
# make as many commits as you like
# Now we take everything that's between the tips of feature and moveme, and move it to the tip of master.
$ git rebase --onto master feature moveme
# Now merge moveme into master: this'll just fast-forward master.
$ git checkout master
$ git merge moveme
# moveme can be deleted now
$ git branch -d moveme
# Now rebase the feature branch
$ git checkout feature
$ git rebase master

As I've become more familiar with git, I've found that temporary, throw-away branches can be useful in a variety of situations. Another one is making a backup branch prior to a potentially messy rebase: just create a branch 'backup' where you are, so that no matter what happens with the rebase, your current chain of commits is preserved. If there are conflicts, you can diff against it to check no code was lost. And when you're happy with the result of the rebase, just delete it. Branches being cheap and local opens up a whole new set of uses for them.
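
For example, before a rebase I'm nervous about (branch names purely for illustration):

$ git branch backup
$ git rebase master
# compare the result against the untouched copy of your commits
$ git diff backup
# happy with it? delete the backup (-D, since its commits aren't merged anywhere)
$ git branch -D backup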

Get out, git!

By joachim, Thu, 05/03/2012 - 07:41

There are lots of good reasons to have your server's codebase be an actual git checkout. But there's one potential flaw: your entire repository's history ends up in your webroot inside a .git folder.

You can block access to it in your .htaccess, but that's hacking core (until this patch lands at least).

There is however an alternative method that lets you keep the entirety of git's working folder outside the webroot completely.

Here's how to convert an existing repository to this format:

  1. Move the .git folder to another location, renaming it in the process so it's no longer hidden. The convention is to leave it with a .git ending though, so for example, 'mysite.git'. I put these inside a folder called 'git' in the user's home folder, for instance.
    $ mv .git ~/git/mysite.git
  2. In its original place in your webroot, create a new file called '.git'. Into this file place a single line thus:
    gitdir: /absolute/path/to/your/mysite.git

    This needs to be an absolute path; relative ones confuse git when you go into subfolders. Using '~' to start at the user's home folder doesn't seem to work either.

  3. Finally, we need to tell the config file where the work folder is. This step isn't completely necessary, but it allows you to invoke the git command while standing in subfolders of your webroot, which is too handy a thing to lose.

    Standing either in the webroot or in the git folder, do:

    $ git config core.worktree "/absolute/path/to/your/webroot"

    You can also edit the git config file by hand to set this, which allows you to also add a comment explaining the manoeuvre for future reference.

That's all there is to it. You now have a working git repository whose git folder is completely inaccessible from the outside world.
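
Putting it all together, converting an existing checkout looks something like this (the paths are just for illustration):

$ cd /var/www/mysite
$ mv .git ~/git/mysite.git
$ echo "gitdir: /home/me/git/mysite.git" > .git
$ git config core.worktree "/var/www/mysite"

After that, git works exactly as before, from the webroot or any of its subfolders, while the repository itself sits safely outside it.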

For creating a new repo, you can use the following finger-twister:

$ git --git-dir=/path/to/repo.git --work-tree=. init && echo "gitdir: /path/to/repo.git" > .git

There are more tips in this question on StackOverflow. And for a hands-on tutorial, come to my session on git at DrupalCamp Scotland, taking place later this month in Edinburgh.

Moving a git local branch from one local repository to another

By joachim, Fri, 02/24/2012 - 10:32

You're maybe at one of the many Drupal Co-worker Friday events that are taking place around the world today. You've packed up your laptop and your lunchbox, and you're looking forward to a day out of the house with some human contact.

But yesterday you were halfway through a big piece of work on your project. And you were using a git local branch, of course. Why? Because it keeps your work isolated from the main development branch, allowing other work to continue independently. And because while your commits are only local you're free to reorder them, edit the log messages, fix up mistakes as if they never happened, and so on. In fact, if you so choose, merging your local branch in with the --squash option makes it look like you did all of the work in a single, perfect commit. Wow!

But that branch is on your desktop machine, and today you're on the laptop. How to get it over from one to the other? You don't want to push it to the remote, because then that means you can't rework it, and it doesn't really belong there anyway. You could make one big patch of your branch against the development branch, but then on your laptop you won't have the dozen or so commits you made yesterday (and you work with small, simple commits, because it makes it easier to roll back should you need to, and to see what's been changed). Furthermore, you've a bunch of changes that aren't yet committed because they're work in progress. You need those too, but not mixed in with what's already committed and reasonably stable.

The solution is the git format-patch command. At first glance this is a rather weird one, which fills your folder up with cryptic mailbox files (which as far as I can tell are nigh on useless in the modern world of webmail). But it has an option which turns it into something very useful indeed: --stdout. So much so that I've added it to my global git config thus:

  [alias]
    fp = format-patch --stdout

So standing in your repository and doing

  git fp the-dev-branch > my-local-branch.patch

will give you one single file that comprises multiple patches, one for each commit. Copy that to your laptop by your favourite means, and over there do:

  git checkout -b my-local-branch
  git am my-local-branch.patch

And all your commits from your desktop machine are reproduced on your laptop.

But what about your work in progress? I was coming to that. Before you begin, make one commit of all of them, perhaps with a log message to remind you it's the work in progress.
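
Something like this will do, on the desktop machine before you run git fp (git add any brand new files first, so they're included):

  git commit -a -m "WIP: work in progress, do not push"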

Then after you've done 'git am', all we have to do is kill off that final tip commit, while keeping the changes it contains in the filesystem. The command for that is git reset with the --mixed option:

  git reset --mixed HEAD^

All your work in progress changes are now 'unstaged changes', and your laptop's copy of the repository is in exactly the same state as your desktop machine's.

What about going back the other way? Well I'll figure that one out tonight, I expect ;)

PS. The return trip is easily done thus:

When you create the branch on the new machine, add a tag once you've applied the patch: git tag mytag. Then at the end of the day, do the original process in reverse, but take your patches from the tag: git fp mytag > homeward.patch. That way the patch file contains only the commits you've made during the day.

And on your home machine, remember to kill the 'work in progress' commit before you apply the patch, with 'git reset --hard HEAD^'. That's a hard reset this time, since the changes in that commit are in the new commits you're bringing back.
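
Put together, the round trip looks something like this (branch, tag, and file names are all just for illustration):

  # on the laptop, at the start of the day
  git checkout the-dev-branch
  git checkout -b my-local-branch
  git am my-local-branch.patch
  git tag mytag
  git reset --mixed HEAD^
  # ...work and commit through the day, committing any loose ends as a new WIP commit...
  git fp mytag > homeward.patch

  # back on the desktop that evening
  git checkout my-local-branch
  git reset --hard HEAD^
  git am homeward.patch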

Hope your Drupal Coworker Friday was as productive as mine!

A concept for limiting taxonomy terms by common fields

By joachim, Wed, 11/23/2011 - 13:20

There are several modules that provide taxonomy term widgets that are more efficient at drilling down into large and complex vocabularies of terms, but I've not yet found something that just limits terms in some way.

The use case is somewhat like this: I have products that are classified by sport and by team. And teams can be rugby teams or football teams, and so on. Having one vocabulary per type of team feels somewhat weak, but putting them all in one vocabulary means the user has to wade through a lot of irrelevant terms.

So I've been thinking of ways to limit the terms in the field widget. I don't think this is something I'm going to develop (for reasons that will become clear), but it's just a wacky idea[*] I'm putting out there.

I've discarded my initial thought of using a hierarchy of terms, because that would result in terms that are essentially empty: the term that groups the football teams is not really a team itself. It would also mean needing to prevent its selection in the widget, and its appearance in filter forms.

What I've landed on as a concept instead seems wacky but I think makes a lot of sense: now that we can have fields on anything, we can match them up. What I mean by this is that we compare the value in the 'sport' field on the product with the value in an identical sport field applied to the terms. (Yes, taxonomy terms on taxonomy terms. It was bound to happen eventually, right?)

In terms of IA, this is actually quite simple and logical: add a term reference field to the team vocabulary, and add the sport field, which is already on products, to the team terms (which the client will be doing; I have no idea what teams belong to what sport).

In terms of the work the UI has to do, it's not that complex either once it's laid out. It goes like this: when the value in the sport field changes, make an ajax call on the team field. In the callback, load all the terms. For each term, look at its sport value. Does that match the current value of the sport field in the product form? If so, let the term into the new options array. If not, discard it.
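
As a very rough sketch of that filtering step (the field and variable names here are invented for illustration, and the exact $form_state structure depends on the widget in use), the callback might do something like:

// Sketch only: 'field_sport' and $team_vocabulary are invented names.
// The sport currently selected on the product form.
$sport_tid = $form_state['values']['field_sport'][LANGUAGE_NONE][0]['tid'];
$options = array();
// Load the full term entities so we can read their fields.
foreach (taxonomy_get_tree($team_vocabulary->vid, 0, NULL, TRUE) as $term) {
  $term_sport = field_get_items('taxonomy_term', $term, 'field_sport');
  // Keep only the teams whose sport matches the product's sport.
  if ($term_sport && $term_sport[0]['tid'] == $sport_tid) {
    $options[$term->tid] = $term->name;
  }
}
// $options is then used as the team widget's new options array.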

It's simple, but I've hit a problem: we have two really good JavaScript systems in Drupal 7 that work very well individually, but as far as I can tell live in completely different universes. The states system is great for telling one form element to show, hide, or change its value depending on the value of another element. The ajax system is great for telling one form element to react to a change in itself by updating something via ajax. What I don't see how to do is get the term reference widget element to watch for changes in other elements (like the states system does), and then update itself with ajax (using the ajax system ideally).

So that's where I've hit the buffers: my JavaScript skills are pretty minimal, and so that's where I leave this latest wacky idea for now. I've got proof-of-concept code of a term reference widget updating itself via ajax based on form values, but the missing piece is getting it to react to changes in another field, whose form elements can't be altered in PHP (because the term reference widget is being changed in hook_field_widget_form_alter(), which only sees that form element). I'd be grateful for any hints; I actually think this model is not as wacky as it first seems and could have a lot of applications.

[*] Back at DrupalCon London I chanced upon a conversation webchick and stella were having about Coder review automation on Drupal.org, and prefixed a suggestion with, 'I've got a wacky idea...', to which webchick said, 'When are your ideas not wacky?' I don't know whether it was just an off the cuff quip, or whether I really do come up with a lot of crazy stuff. I guess I can live with that reputation ;)

Dynamically changing Views table joins

By joachim, Fri, 10/28/2011 - 08:52

I've recently had cause to make Views join to tables in peculiar ways. Here are some notes on the peculiar things I did with the views_join class to accomplish that.

First of all I'll briefly recap how we define a table to Views. Each item in the $data array returned by hook_views_data() represents all the information about a table. Each key in the array is a field on that table (well, or a pseudofield), except for the 'table' key, which has the basic data about our table, like this:

$data['my_table'] = array(
  // This defines how the table joins back to different bases.
  'table' => array(
    // How to join back to the base table 'crm_party'.
    'crm_party' => array(
      'left_field' => 'pid',
      'field' => 'pid',
    ),
  ),
);
// Now we can add field definitions on this table.

That's the simplest case. It says, 'to join back to {crm_party}, join on the column 'pid' on both tables'. (Note I will say 'column' when I am speaking of the database, and 'field' for Views, though that can mean both a field that you add to the view, and a field on the table that provides filters, sorts, or arguments.)

So adding a field on this table will cause Views to add this join clause to the query:

... JOIN my_table ON crm_party.pid = my_table.pid

We can easily join where the columns have different names, by giving different values for 'field' and 'left_field' in the table definition.
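
For example (the differing column name here is invented for illustration):

// How to join back to 'crm_party' when our table calls the column something else.
'crm_party' => array(
  'left_field' => 'pid',
  'field' => 'party_id',
),

which gives:

... JOIN my_table ON crm_party.pid = my_table.party_id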

If the join requires conditions, that's where the 'extra' clause comes in, like this:

// How to join back to the base table 'crm_party'.
'crm_party' => array(
  'left_field' => 'pid',
  'field' => 'pid',
  'extra' => 'foo = 42',
),

This now gives us:

... JOIN my_table ON crm_party.pid = my_table.pid AND foo = 42

The 'extra' can also take an array, in which case each item is an array containing field, operator, and value. (If it seems a bit like the Database API, but not quite, that's because all this was introduced in Views 2 on Drupal 6).

// How to join back to the base table 'crm_party'.
'crm_party' => array(
  'left_field' => 'pid',
  'field' => 'pid',
  'extra' => array(
    // The 'extra' array is numeric, hence has no keys. This always looks odd to me!
    array(
      'field' => 'foo',
      'value' => 42,
      'numeric' => TRUE,
    ),
  ),
),

So far, this is all covered in the Advanced Help documentation contained within Views. But for our relationship handler from CRM Parties to attached entities, we needed a condition on the join depending on values selected in the UI. So the 'extra', defined in hook_views_data(), won't do, as it's not changeable. Or is it?

When a relationship handler is adding itself to the query, the query hasn't been fully built yet. Rather, Views has a views_plugin_query_default object which will eventually be used to make a Database API SelectQuery. This means we can actually reach into the table queue and change the definition for any table to the left of us, like this:

// Our relationship handler's query method:
function query() {
  // Call our parent query method to set up all our tables and joins.
  parent::query();

  if ($this->options['main']) {
    // This is a little weird.
    // We don't add an 'extra' (ie a further join condition) on our
    // relationship join, but rather on the join that got us here from
    // the {crm_party} table.
    // This means reaching into the query object's table queue and fiddling
    // with the join object.
    // Setting a join handler for the join definition is not useful, as that
    // would have no knowledge of the user option set in this relationship
    // handler.
    // @todo: It might however be cleaner to set one anyway and give it
    // a method to add the extra rather than hack the object directly...
    $table = $this->table;
    $base_join = $this->query->table_queue[$table]['join'];
    $base_join->extra = array(array('field' => 'main', 'value' => TRUE));
  }
}

When I wrote those comments in the code last week, I'd found that using a custom join handler isn't useful, because that has no knowledge of the relationship handler's data. However, this week I found myself working on a different case where I did need to find a way to do just that.

This week's problem was how to filter out Drupal Commerce products that are in the current cart, or more generally in any order (and the current cart's order ID can be supplied with a default argument plugin).

It seems a reasonable enough thing to ask of Views, but it's actually pretty complex, as what's in a cart or order is not products but line items, each of which refers to a product with a reference field.

After several failed attempts, I managed to write a query that produces the correct result, but it requires joining to a subquery which itself has the order ID within it (see the issue for gory details).

The first hurdle with this is easy to overcome: there's nothing wrong with telling Views about a table that doesn't exist. This is often done with aliased tables, but in fact the table can be totally fictional, provided we also supply our own join handler which understands what to do to the query. That can be anything, as long as the SELECT fields we also add make sense. Hence it's fine to do this in hook_views_data():

// Fake table for the 'product is in order' argument, made from a subquery.
$data['commerce_product_commerce_line_item'] = array(
  'table' => array(
    'group' => 'Commerce Product',
    'join' => array(
      // Join to the commerce_product base.
      'commerce_product' => array(
        'left_field' => 'entity_id',
        'field' => 'line_item_id',
        'handler' => 'views_join_commerce_product_line_item',
      ),
    ),
  ),
);

Our custom join class now has to add the subquery to the view, but it also needs the argument value to do this.

The way I worked around this was to override the ensure_my_table() method in the argument handler. Normally, this calls $this->query->ensure_table() which then creates the join, but ensure_table() can take a join parameter to work with. The overridden version of ensure_my_table() creates the join object, and sets the argument value on it:

function ensure_my_table() {
  // Pre-empt views_plugin_query_default::ensure_table() by setting our join up now.
  // Argh, hack this in for now. This may mean relationships using this break?
  $relationship = 'commerce_product';

  // Get a join object for our table.
  // This is of class views_join_commerce_product_line_item, which takes
  // care of joining to a subquery rather than a table.
  $join = $this->query->get_join_data($this->table, $this->query->relationships[$relationship]['base']);

  // We add the argument value to the join handler as it needs to use it
  // within its subquery.
  $join->argument = $this->argument;

  // Finally, ensure the table, passing in our prepared join object
  // (ensure_table() accepts a join object to use).
  $this->table_alias = $this->query->ensure_table($this->table, $relationship, $join);
}

This means that in the build_join() method for our custom join handler, views_join_commerce_product_line_item, we can rely on the argument value that the view has received:

function build_join($select_query, $table, $view_query) {
  // (snip...) build a SelectQuery object for the subquery
  // Set the condition based on the argument value.
  $subquery->condition('cli.order_id', $this->argument);
  // (snip...)
  // Add the join on the subquery.
  $select_query->addJoin($this->type, $subquery, $table['alias'], $condition);

The views_join class's build_join() method is where the Views system of building a query is translated into a Database API SelectQuery object. Here we build up our own query (and we don't need Views-safe aliases, as it's a completely internal, non-correlated subquery), and pass it in as a join which uses it as a subquery.

It remains only for the argument handler's query() method to add the conditions for its field, using the alias and field names we gave for the subquery.

In conclusion, the data structure Views understands may appear to be a fixed, declared thing, but with a little bit of tweaking the way tables are joined in a Views query can be affected by both site configuration and user input.