
Reviewing the misguided and inaccurate data informing the Kriegler report of the Independent Review Commission (IREC), Kenyans for Peace, Truth and Justice (KPTJ) offers its conclusions on the statistical inadequacies that have precluded the drawing of a definitive picture of electoral fraud. Without an effective research design to establish where and why vote-counting inaccuracies arose, KPTJ argues, the IREC's finding of 'materially defective' results adds nothing meaningful to what Kenyans already know about what went wrong with the election process.

The mandate of the Independent Review Commission (IREC) was to answer a key question: who won the presidential elections? At the very least, Kenyans hoped that the commission would uncover whether or not intentional fraud was perpetrated, and if so, how and by whom.

A doctor diagnoses disease by observing the symptoms of a sick patient. The commission was expected to examine a large body of primary evidence in detail, looking for patterns that suggested the causes of the failed election process.

Unfortunately, the commission’s approach failed to meet the challenge confronting them.
This failure was rooted in two key aspects of how its investigation was carried out.

First, the commission either did not know of, or chose not to employ, the right tools for a forensic analysis of elections. Numerical skills are neither new nor untested in resolving electoral disputes.

Second, the commission did not think critically about why and how fraud could have been committed at the Kenyatta International Conference Centre (KICC). As a result, the methods it employed to investigate the existence or non-existence of electoral fraud at KICC were wanting. Here are the main problems with those methods:

1. The Kriegler investigation – specifically its analysis of the numbers – could not support its conclusion that all of the Electoral Commission of Kenya's (ECK) results were wrong.
2. The commission failed to apply modern methods in attempting to understand the ECK’s numbers, thus excluding potentially important evidence that the commissioners and the Kenyan people could use to form an opinion.
3. The decision to examine only 19 problematic constituencies ensured that the commission could not determine whether rigging occurred at KICC.
4. Given its resources and the time available to it, the commission could have conducted research capable of answering the questions at hand. It appears that it either did not know the proper methods to use or deliberately chose not to use an effective analytical strategy.

A. SAMPLING PROBLEMS

After looking at the results of 19 constituencies – chosen because of complaints about them – the commission concluded that one could not rely on any figures from the ECK. This claim implies that the rest of the constituency results are equally flawed. It cannot be supported with the methods that the commission used, and demonstrates a basic misunderstanding of simple concepts, like random sampling, as well as an ignorance of more sophisticated statistical election forensics.

The commission's rationale for not employing electoral statistics rested on presumptions of ECK inaccuracy. The IREC stated that '[a]lmost all parliamentary and presidential election results for the constituencies sampled are erroneous, which means that very few of the officially published figures are actually accurate.' The claim here is that because the ECK's results in the 19 'sampled' constituencies contained many errors, most of the other 191 constituencies contain similar errors. This claim is false. The mistake the commission made lies in the way it chose its 19-constituency sample. Because the IREC's sample focuses on disputed constituencies rather than a random sample of all constituencies, its findings from that sample cannot be generalised to all 210 constituencies.

To illustrate, suppose a farmer has 100 chickens, and he wants to estimate about how much he will earn if he takes them to market and sells them. He knows that different sized chickens fetch different prices, and he is not sure how many large and how many small birds he has. Our farmer decides that he is going to catch 10 chickens, and use those 10 to generalise about the larger flock. He does so, and finds that the 10 birds he caught are rather lean, with little meat on their bones. Crestfallen, he sends his wife with the chickens to market, telling her not to expect too much given the sorry state of their flock. That evening, she returns, her purse bulging with money, and tells him that they did quite well at the market with all the large chickens.

What had the farmer done wrong when estimating the value of his flock? His ‘sample’ contained the weaker, sicker chickens, since they were easier to catch than healthy chickens. As a result, his estimate of the nature of the flock was not accurate, and he should not have concluded that his flock was full of small birds.
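The farmer's mistake can be simulated in a few lines. The sketch below uses entirely invented numbers – a flock of 70 large and 30 small birds, with small birds assumed to be five times easier to catch – purely to illustrate how a convenience sample biases an estimate while a random sample does not:

```python
import random
import statistics

random.seed(0)

# Hypothetical flock: 70 large birds worth 500 KSh each, 30 small worth 200 KSh.
flock = [500] * 70 + [200] * 30
true_mean = statistics.mean(flock)  # 410 KSh per bird

# Small, weak birds are easier to catch: five times as likely to end up in hand.
catch_weights = [1 if v == 500 else 5 for v in flock]

def biased_estimate() -> float:
    """Mean value of 10 birds caught the easy way."""
    return statistics.mean(random.choices(flock, weights=catch_weights, k=10))

def random_estimate() -> float:
    """Mean value of 10 birds drawn uniformly at random."""
    return statistics.mean(random.sample(flock, 10))

# Averaged over many repetitions, the biased sample badly underestimates
# the flock's value, while the random sample is on target.
biased_avg = statistics.mean(biased_estimate() for _ in range(2000))
random_avg = statistics.mean(random_estimate() for _ in range(2000))
print(round(biased_avg), round(random_avg), true_mean)
```

The same logic applies to the IREC's 19 constituencies: a sample 'caught' because it attracted complaints tells you about disputed constituencies, not about the whole flock of 210.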

The choice to focus on problematic constituencies is understandable, but it precluded the possibility of drawing general conclusions about the results in all 210 constituencies.
As a result, we cannot conclude, as the IREC does, that ‘very few of the officially published figures are actually accurate’.

Demonstrating that the IREC's sample is not representative of the 210 constituencies is quite easy. If its sample were representative of all constituencies, then the average of constituency-level characteristics in the sample – give or take the standard deviation – would be very similar to the averages of those variables for all 210 constituencies.

This is simply not the case. In the IREC's sample, Kibaki received, on average, 62 per cent of the vote, whereas his average for all constituencies was only 41 per cent. In the IREC's sample, constituencies tended to have much lower population densities, and many more registered voters. Likewise, in the IREC's sample, the average percentage difference between presidential and parliamentary votes cast is 7.1 per cent, while the average over all constituencies is a mere 2.5 per cent.
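The check just described takes only a few lines to run. The vote-share figures below are invented for illustration (only the 41 per cent population average comes from the paragraph above); the point is the check itself, which measures how many standard errors separate the sample average from the population average:

```python
import math
import statistics

def standard_errors_from(sample: list[float], population_mean: float) -> float:
    """How many standard errors the sample mean lies from the population mean."""
    se = statistics.stdev(sample) / math.sqrt(len(sample))
    return (statistics.mean(sample) - population_mean) / se

# Hypothetical Kibaki vote shares (%) in a non-random sample of constituencies.
sampled_shares = [55.0, 58.0, 60.0, 61.0, 62.0, 63.0,
                  64.0, 65.0, 66.0, 68.0, 70.0, 72.0]

# Average over all 210 constituencies, as reported above.
population_mean = 41.0

z = standard_errors_from(sampled_shares, population_mean)
# A representative sample should sit within a couple of standard errors of
# the population mean; a gap this large flags the sample as unrepresentative.
print(round(z, 1))
```

A gap of a dozen or more standard errors, as here, is exactly the kind of red flag that should have stopped the IREC from generalising its 19-constituency findings.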

B. CURSORY DISMISSAL OF STATISTICAL MODELLING

The commission claims that because 'the official ECK election results (published on the website and elsewhere) have not been cleaned of mistakes of a purely arithmetical nature', they should not be analysed. The decision not to apply any more advanced statistical tests to the election results, on the belief that they were all faulty, was clearly wrong. Statistical tools exist to deal with messy, problematic numbers like those produced by the ECK. The commission thus missed an important opportunity to investigate the results for fraud in a more detailed manner.

A number of statistical methods have been developed to assess this kind of data, even when it is as unreliable as the ECK's. The aim of such methods is twofold: to reduce the influence of anomalous data points when estimating the statistical relationship between variables, and then to use that estimated relationship to separate anomalous data points from 'normal' ones. By achieving these two goals, such methods can estimate the actual relationship between two variables (e.g., parliamentary vote counts and presidential vote counts) while filtering out the impact of data points containing gross errors.
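One simple member of this family of robust methods is the Theil–Sen median-of-slopes fit, sketched below on invented constituency totals (the IREC used no such method, and these are not real Kenyan figures). The fit estimates the parliamentary–presidential relationship while ignoring gross errors, then flags points that sit far outside it:

```python
import statistics
from itertools import combinations

def theil_sen(xs, ys):
    """Robust line fit: median of all pairwise slopes, then a median intercept."""
    slopes = [(y2 - y1) / (x2 - x1)
              for (x1, y1), (x2, y2) in combinations(zip(xs, ys), 2)
              if x1 != x2]
    slope = statistics.median(slopes)
    intercept = statistics.median(y - slope * x for x, y in zip(xs, ys))
    return slope, intercept

def flag_anomalies(xs, ys, k=5.0):
    """Indices of points whose residual is more than k robust deviations out."""
    slope, intercept = theil_sen(xs, ys)
    residuals = [y - (slope * x + intercept) for x, y in zip(xs, ys)]
    med = statistics.median(residuals)
    mad = statistics.median(abs(r - med) for r in residuals)
    return slope, [i for i, r in enumerate(residuals) if abs(r - med) > k * mad]

# Invented constituency totals: parliamentary votes cast (x) against
# presidential votes cast (y). One presidential count has been inflated.
parl = [9000, 10000, 11000, 12000, 13000, 14000, 15000, 16000]
pres = [9150, 10390, 11270, 12470, 13290, 14490, 21000, 16400]

slope, flagged = flag_anomalies(parl, pres)
print(round(slope, 3), flagged)  # fit near 1.0; the inflated constituency flagged
```

Because the slope is a median of pairwise slopes, a handful of grossly erroneous constituencies cannot drag the estimate away from the true relationship – precisely the property needed for 'messy' data like the ECK's.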

Statisticians and political scientists have applied these methods to electoral data in the past. Recently, scholars developed a method that identifies abnormal votes for a third-party candidate in the 2000 US presidential election. The IREC did not attempt to apply this family of methods to the Kenyan electoral data. Moreover, the Electoral Commission still has not released the polling-centre-level results needed for such an analysis.

C. FAILURE TO EXAMINE STATUTORY FORMS USING APPROPRIATE STATISTICAL TESTS

The commission claims that because there were many allegations of changes in statutory forms, statistical tests could not be used to catch the culprits. Yet statistical tests exist that can detect electoral fraud resulting from changes made in statutory forms. Unlike other approaches that rely on assumptions about past or concurrent voting behaviour, or set arbitrary thresholds for 'unlikely' voting behaviour (like high turnout), these tests rely on well-established patterns that naturally appear in numbers like vote counts. If an official commits electoral fraud by changing a candidate's vote count on a statutory form, these tests are likely to detect deviations from those patterns. The IREC appears neither to have considered these methods nor to have applied them to data from the 1,702 polling centres that it examined in detail.

Scholars have used these techniques on electoral data from Sweden and Nigeria, finding very little evidence of fraud in the former and significant evidence in the latter. In the course of its investigations, the IREC examined the 17A forms in detail, which contain exactly the kind of numbers these scholars have examined. Yet the numbers were not subjected to statistical tests – tests that could have helped differentiate between results arising from a normal electoral process, results arising from simple, unintentional 'human error' (such as mis-transcription or incorrect arithmetic), and results arising from intentional falsification. Unfortunately, even though it examined data from 19 constituencies in detail, the IREC did not apply these simple tests to the results.

Given the IREC's reluctance to rely on any source of data as an 'objective' benchmark against which to compare numbers reported by the ECK or other parties, one would have hoped that it would employ 'industry-standard' electoral forensics. It could have done so by relying on well-established statistical facts like Benford's Law to examine the veracity of vote counts and turnout numbers at the polling-station and form 17A levels. This kind of statistical evidence, combined with an examination of the inconsistent manner in which many statutory forms (specifically forms 16A and 17A) were filled out, would have provided a much clearer differentiation between fraud and incompetence.
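To illustrate what such a test looks like, the sketch below computes a chi-squared statistic for leading digits against Benford's Law. It is a simplified first-digit version with invented inputs (election forensics more often uses the second digit, and a real analysis would need the polling-station counts the IREC had in hand):

```python
import math
from collections import Counter

def benford_chi2(counts):
    """Chi-squared statistic of leading digits against Benford's Law.

    Values above roughly 15.51 (8 degrees of freedom, 5% level) suggest
    the digits do not follow the distribution expected of natural counts.
    """
    digits = [int(str(c)[0]) for c in counts if c > 0]
    n = len(digits)
    observed = Counter(digits)
    chi2 = 0.0
    for d in range(1, 10):
        expected = n * math.log10(1 + 1 / d)
        chi2 += (observed.get(d, 0) - expected) ** 2 / expected
    return chi2

# A geometric series is a classic Benford-conforming data set ...
natural = [2 ** k for k in range(1, 500)]
# ... while figures fabricated in a narrow range are not: every one of
# these invented 'counts' leads with a 7, 8 or 9.
fabricated = [700 + 3 * i for i in range(100)]

print(round(benford_chi2(natural), 1), round(benford_chi2(fabricated), 1))
```

Applied to the 17A numbers, a battery of such tests would not prove fraud on its own, but it would have pointed investigators to the polling stations and constituencies most deserving of a physical recount.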

D. IMPROPER RESEARCH DESIGN RELATIVE TO THE MANDATE

For the IREC to make effective recommendations on how to reform Kenya’s electoral system and processes, the commission needed to establish where and why vote counting went wrong. The finding that the results were ‘materially defective’ adds nothing to what Kenyans already know about what went wrong with the ECK, and provides no advantage in terms of what reforms make the most sense. Without trying to find the truth about what went wrong and why, the IREC cannot diagnose the specific problems with the ECK.

At least two problems plague attempts to detect electoral fraud. First, differentiating between ‘human error’ and intentional fraud can be a difficult task. Using several types of evidence on the same area or polling station, however, can go a long way towards telling one from the other.

Second, fraud can occur at many different levels, either independently or simultaneously.
For instance, a presiding officer at the polling station might falsify electoral returns submitted on a form 16A. A returning officer might do something similar on the constituency-level form 17A. And a supervisor at KICC might adjust votes between constituencies at the provincial level. A suitable research design should be able to tell the difference between an honest mistake and intentional fraud, as well as differentiate fraud on one level from that on another.

These requirements for a suitable research design have a practical purpose. Even if we accept the IREC’s assertion that figuring out who won is not in its mandate, without a suitable research design, the IREC would not be able to fulfil another key part of its mandate: to make substantive recommendations on the reform of the Electoral Commission of Kenya.

Because it did not develop a convincing approach to understanding what problems occurred where and at what level during the elections, the IREC could not effectively differentiate human error from attempts at fraud, nor locate either of these phenomena at the polling-station, constituency, or national level. As a result, Kenyans received a report telling them much of what they already know: that the elections contained many problems, including bribery, vote-buying, intimidation, and the like.

The IREC’s errors in research design may lie at the root of its unnecessarily vague findings. Was the IREC’s statistical research design capable of deducing whether or not there was rigging at the KICC, or at any other level, for that matter? To do so, its design would have to achieve two goals. First, it would have to differentiate between human error and fraud. In the report, human error is generally associated with a stressful and complex voting environment.

However, these claims are simply theories. If difficult voting environments caused more discrepancies, then discrepancies should be correlated with factors we think cause ‘difficult voting environments.’ The IREC did not examine these theories using even the most basic statistical tools at the polling station or constituency level.
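A basic version of that check is a one-line correlation. The figures below are invented: a 'difficult environment' proxy (registered voters, in thousands) set against the number of arithmetic discrepancies found per constituency. If the stress theory is right, the correlation should be strongly positive; a correlation near zero would undercut it:

```python
import math
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: registered voters (thousands) per constituency and the
# number of counting discrepancies found there.
voters = [20, 28, 35, 41, 50, 58, 66, 75]
discrepancies = [2, 3, 3, 5, 6, 6, 8, 9]

r = pearson(voters, discrepancies)
print(round(r, 2))  # strongly positive here, consistent with the stress theory
```

The IREC had the data to run exactly this kind of test against its own 'stressful environment' explanation, and did not.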

The second flaw in the IREC’s research design lies in its inability to attribute errors – fraudulent or otherwise – to a specific point in the counting process. Given the IREC’s reluctance to believe analyses based on ECK data, it seems odd that its ‘analysis relied only on official documents and results submitted to IREC by ECK’.

One could argue that, since the ECK may have felt threatened by the IREC's mandate, documents coming from the ECK could have been manipulated to help absolve it of fraud. We have no evidence for this hypothesis. But if the IREC finds other analyses using ECK data unconvincing, why should the IREC's own analysis of documents that had been in the ECK's possession since the elections be credible?

Speculation aside, if we assume that the documents provided to the IREC by the ECK are genuine, could its analysis determine whether or not fraud took place at the KICC? Again, the answer appears to be 'no'. A basic point of departure for many criminal investigations is 'cui bono?': who benefits? Unfortunately, the IREC's unorthodox sampling procedures prevent any meaningful inference about who may have benefited from the changes made at KICC, such as the differences between the results on form 16 and the official ECK final results.

Moreover, the IREC did not recognise the ECK's opportunity to commit a kind of fraud at the KICC that could not occur at lower levels. Only at the national tallying centre could a coordinated transfer of votes between constituencies, into rejected votes, or between candidates have been carried out. To detect such subtle changes, the IREC would have had to examine the results of an entire province, or even multiple provinces – a task it was clearly unwilling to undertake.

DODGING THE TOUGH TASKS

Even before the IREC was set up, Kenyans for Peace, Truth and Justice (KPTJ) raised four concerns with regard to the 2007 elections:

1) Anomalies in election results documents;
2) Discrepancies between official results and those published by the media;
3) Suspiciously high voter turnout; and
4) Discrepancies between presidential, parliamentary and civic vote totals.

The main problems with the IREC report arise from two connected issues: anomalies in Form 16A and discrepancies between presidential, parliamentary and civic voter turnout. There was no standard way of filling in these documents. Some were handwritten, others were typed. Some had the totals crossed out. Some had the returning officer's stamp, others did not.

Results announced by Kenya Television Network (KTN) in almost half of the constituencies – 93 out of 210 – differed from those announced by the ECK. KTN's figures are the closest thing to a complete media record: Nation Media Group's results database, together with its backup, inexplicably crashed and lost the results. The differences between the KTN and ECK results total 208,208 votes, with all three major presidential candidates registering both gains and losses.

Using the 2002 general election as a benchmark, average voter turnout is 70.7 per cent and can swing either way by 12.4 percentage points. This gives a 'normal' turnout band of roughly 58.3 per cent to 83.1 per cent.

In Coast and Nairobi provinces, constituencies registering under 50 per cent voter turnout – that is, unusually low – give a total of 14,242 possibly subtracted votes. In Central, Nyanza and Rift Valley provinces, however, constituencies registering over 80 per cent voter turnout – that is, unusually high – give a total of 150,212 possibly added votes.
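The turnout screen described above reduces to a few lines of arithmetic. The band comes from the 2002 benchmark quoted earlier (70.7 per cent, plus or minus 12.4 percentage points); the constituency figures in the example are invented:

```python
# 'Normal' turnout band from the 2002 general election benchmark.
MEAN_TURNOUT = 70.7   # per cent
SWING = 12.4          # percentage points
LOW, HIGH = MEAN_TURNOUT - SWING, MEAN_TURNOUT + SWING

def classify_turnout(registered: int, votes_cast: int) -> str:
    """Label a constituency's turnout as normal, suspect-high or suspect-low."""
    turnout = 100 * votes_cast / registered
    if turnout > HIGH:
        return "suspect-high"   # possibly added votes
    if turnout < LOW:
        return "suspect-low"    # possibly subtracted votes
    return "normal"

# Invented constituencies: (registered voters, votes cast).
examples = [(100_000, 91_000), (80_000, 56_000), (60_000, 27_000)]
print([classify_turnout(r, v) for r, v in examples])
```

A screen like this is only a first pass – high turnout is not proof of stuffing – but it identifies exactly where recounts and form inspections should be concentrated.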

Using the 1992, 1997 and 2002 general elections as benchmarks, the variance between the total votes cast in the three polls within a constituency is usually about 1.2 per cent, almost entirely accounted for by spoilt ballots. That is, almost all voters tend to vote in all three races. This, however, was not the case in 2007: the total anomalous vote between presidential and parliamentary votes cast was 455,667, and between presidential and civic votes cast, 377,816. As the winning margin announced by the ECK was 231,728, both comparisons show inflation of the presidential vote sufficient to have altered the presidential outcome, given that the differences benefited Kibaki more than Odinga.
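The decisive comparison in the paragraph above is simple arithmetic, worth making explicit. The three figures are those reported in the text, not new data:

```python
# Totals reported above for the 2007 election.
pres_vs_parl_gap = 455_667   # anomalous votes, presidential vs parliamentary
pres_vs_civic_gap = 377_816  # anomalous votes, presidential vs civic
winning_margin = 231_728     # ECK's announced presidential winning margin

# Either excess alone exceeds the winning margin: if the anomalous votes
# disproportionately favoured one candidate, they could have changed the result.
print(pres_vs_parl_gap > winning_margin, pres_vs_civic_gap > winning_margin)
```

This is why the size of the anomalous vote matters: an excess smaller than the margin could not have changed the winner, but an excess twice the margin plainly could have.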

The discrepancies in the results announced by the ECK are huge and, in many cases, suspicious – a fact the IREC concedes but explains away by claiming that there really were no discrepancies at all. While the IREC attributes all contradictions to addition errors that, by and large, disappear once the calculation is done correctly, such a conclusion would depend on all the Forms 16A being accurate. There is no guarantee – or even likelihood – that the ECK figures and forms that the IREC examined were not tampered with. It is evident from the report that the commission did not examine the truthfulness of the Forms 16A.

A report that says it is impossible to say who won the elections because the results were ‘irretrievably polluted’ cannot at the same time rule out the possibility of rigging at the Kenyatta International Conference Centre. It is not clear why the answer to that question should be ‘irrelevant’, especially since the commission concludes and emphasises that there was no evidence of rigging by the ECK at the KICC. If one relies on the ECK figures, one should be able to say who won.

The IREC consistently assumed that all errors were the result of incompetence rather than fraud. Why should that be the more credible interpretation? Twelve out of 13 people who gave evidence under oath were from the ECK. Why did the commission not question more witnesses, including all the returning officers from the 19 constituencies that were closely examined? Why did it rely largely on ECK testimonies?

The glaring double standard the commission applies to 'evidence' is appalling. While it sets rigorous standards for proving fraud at the KICC, it uses sweeping generalisations as the basis for other conclusions. The entire report's methodological treatment of sources is uneven.

When it comes to the role of civil society organisations in civic education, for example, the report uncritically reproduces critical voices from meetings around the country. It does not specify how many people said so, who these might have been, where, on what basis and with what credibility. The same goes for the critique directed at international observers. Rather than substantiate and qualify the information it collected during the meetings, the commission’s report is full of ‘some/many…claimed/thought’.

In Annex 4A of its report, the commission works hard to disprove every statement made by KPTJ in Countdown to Deception (a list of anomalies, malpractices and illegalities drawn from the statements of four of the five domestic election observers allowed into the ECK verification process the night before presidential results were announced). If the matter were not so serious, the commission's bizarre obsession would be laughable. Yet this attack raises serious concerns about the motives behind the IREC's obvious hostility towards anything associated with KPTJ. Why is it that the IREC scrutinised KPTJ more closely than it did the ECK, or other witnesses for that matter? The IREC's very different and hostile treatment of KPTJ reveals Judge Johann Kriegler's lie that civil society didn't want to come forward with evidence.

Another example is the exit poll commissioned by the International Republican Institute. In the absence of reliable ECK data, the exit poll is an important source of information for drawing conclusions about the results. However, the IREC dismisses the relevance of the exit poll with very general statements about the need to be methodologically cautious.

While the commission takes note of abuse of state power and resources during the campaigns, there is only limited discussion of the role of the security agencies before, during and after the elections. More generally, the inescapable conclusion is that while the commission may be competent to carry out electoral analysis in a technical sense, it falls short of expectations in its political analysis. The commission fails to answer the more fundamental questions about power and responsibility.

The IREC shies away from any discussion of these burning issues, and there is, therefore, no convincing political context for interrogating the integrity of the elections.

More precisely, perhaps, the context provided in the report is highly selective. The IREC chairman has, in his public statements, alluded to a widespread culture of tolerance towards rigging. In line with this, the report states, 'Kenyan society has long condoned, if not actively connived at, perversion of the electoral process.' The troubling implication of such a description, according to which everyone from the bribed voter in the village to the ECK is more or less equally guilty, is that it tends to blame the victim and to remove most questions of power and responsibility. It suggests that nobody is really more responsible than the next person.

CONCLUSION

A few simple changes in the IREC's research design would have enabled it to diagnose the various problems that occurred, without a significant increase in the cost or effort required.

Such an approach would have allowed the detection of indications of fraud at the polling-station and constituency levels – though not differentiation between the two, since vote recounts would be required to verify the results of a given polling station. In addition, it would have allowed a clearer understanding of exactly how changes made at the KICC affected the outcome, after correcting for human arithmetic error on the 17A forms.

This approach would have been superior to the one the IREC chose, both in diagnosing problems within the structure of the ECK – at the level of the presiding or returning officer, the KICC and so on – and in locating likely fraud spatially. It would also have obviated the need for time-consuming re-tallies from the 16A forms at the constituency level. These criticisms notwithstanding, engaging a document-management firm to re-tally all 16A forms was likely well within the budget and timeframe of the IREC, and would have provided the most comprehensive understanding of how and where fraud and human error affected the results of the 2007 general election. The IREC chose not to do so.

There is obviously a story about the politics of the commission and how powerful actors may have influenced its work. Kriegler gave a clue of the interests at work when he was quoted in the Daily Nation of 30 August saying:

‘I’m not sure it is in anybody’s interest today to find out who won the election. The Government is functioning and the people have moved on. We had people who were enemies in the electoral contest, who seem to be getting used to working with one another, and the awkwardness is wearing off. I don’t think it’s in anybody’s interest to open the Pandora’s Box.’

Such a statement throws considerable doubt on whether the commission actually sought the truth.

It would not be accurate to say that Kriegler’s team did not do any work. Many of their recommendations are sound and echo Kenyans’ demands for a reformed electoral system over the years. Indeed, there is a need for a timetable and clear benchmarks for the rapid and full implementation of the recommendations.

KPTJ cannot, however, share the commission's conclusion that no rigging by the ECK took place at the KICC. In addition to the recommendations in the report, the Attorney General needs to start criminal investigations against the ECK commissioners.

However unpopular it currently may be, it is morally and politically necessary to search for the truth about the elections. Some may consider it prudent and wise to ‘look forward and move on’, but it is highly irresponsible and dangerous to dodge the issues of electoral truth and justice.

By sweeping truth and justice under the carpet in the name of stability, Kenya will be embracing continued impunity. This may turn out to be the most damaging effect of the Kriegler report.

* This report was jointly produced by www.africog.org.
* Please send comments to [email protected] or comment online at http://www.pambazuka.org/.