1/4: Introduction
A Google search for something like "shelxl wght" or "shelxl wght problems" returns many links that mention the WGHT command in SHELXL. Most of these, however, merely parrot the contents of the SHELXL manual. Few, if any, give specific information on the cause of weird WGHT parameters, much less any useful advice on how to deal with them. The aim of this short tutorial is to cover one easy way to deal with an overly large b parameter in the optimized weighting scheme. Before getting to that, though, a brief introduction to the WGHT command and its various options is warranted. None of this first part is new, of course; it is just another regurgitation of the SHELXL manual, to wit:
From the SHELXL manual, the weighting scheme takes the following form:
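w = q / [σ²(Fo²) + (a·P)² + b·P + d + e·sin(θ)]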
Weights are applied via the WGHT command, which has six adjustable parameters:
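WGHT a [0.1] b [0] c [0] d [0] e [0] f [0.33333]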
Here, the values given in square brackets are defaults. The value of P is set according to:
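P = f·Max(Fo², 0) + (1 − f)·Fc²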
and is used to reduce bias (Wilson, 1976). The value of q is 1 if c = 0, but if c is set to a non-zero value, the effect is to upweight the higher-angle data, which can be useful for finding hydrogen atoms in difference maps, etc. The mode of action differs for positive and negative c: q = exp[c·(sin θ/λ)²] when c > 0, and q = 1 − exp[c·(sin θ/λ)²] when c < 0. The SHELXL manual goes into more detail for specialized weighting schemes, but for most structures WGHT requires just a and b (and sometimes only a). For a high-quality structure with reliable diffraction data, we expect a < 0.1 and b < 2 (or so). After each round of refinement, SHELXL suggests a new WGHT line with a and b optimized to flatten the analysis of variance. These a and b parameters are sometimes larger than hoped for. Large a values, for example, can indicate weak data. Large values of b are trickier to interpret but can be due to model deficiencies or to problems with the reflection standard uncertainties [‘s.u.s’ or σ(Fo²)]. Most small-molecule crystallographers will have encountered a checkCIF alert concerning a large b parameter at some point, i.e., something like this:
The second parameter on the SHELXL weighting line has an exceptionally large
value. This may indicate either improper reflection s.u.s or an unresolved
problem such as missed twinning.
Assuming there are no serious model deficiencies, can we get better estimates for σ(Fo²)? The best stage for that is during data reduction. For Bruker data, the σ(Fo²) are calculated by SADABS (Krause et al., 2015) using information from the integration program SAINT (nowadays, both are usually run from within the APEX GUI). Other manufacturers have their own programs (e.g., CrysAlis Pro from Rigaku and X-RED from Stoe) with analogous procedures, so the following could probably be adapted for other systems. Any scheme to improve the σ(Fo²) values in SADABS will entail changing its defaults, not all of which can be accessed in the APEX GUI, so we’ll need the command-line version of SADABS. First, though, let’s scrutinize the σ(Fo²) values themselves: are they too large, too small, or something else?
In SHELXL, the data are input via the HKLF instruction, typically:
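HKLF 4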
The full form of the HKLF command, however, is:
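HKLF N S [1] r11 r12 r13 r21 r22 r23 r31 r32 r33 [1 0 0 0 1 0 0 0 1] sm [1] m [0]

(again, the values in square brackets are defaults)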
allows some manipulation of the data as they are read in. Here, N sets the data format, S scales both Fo² and σ(Fo²), r11…r33 define a 3×3 transformation matrix applied to the reflection indices, sm scales just the σ(Fo²), and m is for compatibility with ‘condensed data’ (ancient and obsolete). For our purposes, the important parameter is sm: if sm = 0.5, all σ(Fo²) will be halved, whereas sm = 2 will double them. Thus, a quick test can tell us whether the σ(Fo²) are too small or too large. For instance, keeping the usual HKLF 4 format with the default scale factor and identity matrix, the following line would halve every σ(Fo²) on input:
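HKLF 4 1 1 0 0 0 1 0 0 0 1 0.5

This quick test is best illustrated by a worked example, which we'll get to in Part 2.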