Evidence ratings

One of the tools that I need to develop for the thesis is a consolidated database of the different evidence that has been cited in support of secret visitor claims. This would catalogue the finds, collate any previous analysis or commentary on individual items, and look for patterns in the data. It is a big job. My best estimate is that there are about 300 separate items that have been claimed, ranging from shipwrecks to rock engravings, coins, different types of plants and a host of other kinds of items.

The evidence ranges from items with quite detailed background information on their discovery and identification, such as the Ptolemy VI coin found in north Queensland and Lawrence Hargrave’s claims, through to ones as vague as ‘a farmer in northern Queensland between the wars found a rock, which he said looked Egyptian’. We can assess the value of the more detailed finds as evidence by looking at the circumstances of their discovery and who found them, through to the quality of the identification and whether alternative explanations can be offered. With the vaguest we cannot say anything except that it is hearsay and does not constitute actual evidence. It is similar to the distinction archaeologists make between the informational value of two identical artefacts, one found on the surface and the other in a specific soil layer on a site.

To try to put some boundaries around this I have been experimenting with an evidence rating that highlights the difference between items put forward with lots of information and those with little or none. If properly applied it should separate out evidence that is worth the time and effort to assess from that which is too vague to even bother with. Note that this is not a measure of how plausible the evidence is, nor of how accurate the identification is; it is about whether the piece of evidence comes with enough context for me to begin to treat it seriously.

After trying several schemes, the one I am now trying gives each item two scores out of 10 points. The first covers the context of its discovery – asking who, when and where, each scored out of a maximum of 3 points, with an additional point if the discovery was reported publicly within 10 years. The higher the score, the more we know about the circumstances of an item’s discovery. The second score measures the descriptive information – whether there is enough to allow us to reassess the identification. The variables measured are the detail of the description, the quality of the documentation, and who made the identification that it was an anomalous artefact, with an extra point if the artefact survives. Together the scores tell us how well we can establish a discovery provenance and a description of the evidence. Both are needed to properly evaluate proposed evidence.

Here is how the scores are allocated.

Context score

Who found it
0 – Not identified
1 – Broad and unspecific, eg ‘a Gympie farmer’
2 – Named individual – not verified as existing, or unspecified member of a family
3 – Named individual – confirmed

When was it found
0 – Not identified, or more than a 25 year range
1 – General date, eg ‘between the wars’
2 – Date within a decade
3 – Specific year

Where was it found
0 – Not identified, or only to a state
1 – Broad – region of a state, or section of coastline, eg ‘on the Great Barrier Reef’
2 – Located within an area, eg ‘north of Gosford’
3 – Identifiable location

When was it reported
0 – Not reported within 10 years of discovery
1 – Reported within 10 years of discovery

Maximum: 10 pts

Description score

Description of item
0 – Not specified
1 – General description of item, but no diagnostic or distinctive features
2 – General description of item, with some possible supporting features
3 – Very detailed description allowing fresh evaluation

Documentation
0 – No documentation
1 – Very poor graphic – insufficient for analysis
2 – Poor picture or graphic – insufficient for detailed analysis, but comparisons possible
3 – Good quality photo or drawing

Identification
0 – No source for identification or basis for claim
1 – Identified by unqualified / inexperienced person
2 – Identified by unnamed expert / inappropriate expert
3 – Identified by named, qualified / experienced expert

Survival
0 – Item no longer known to survive, or cannot be accessed
1 – Item survives and can be re-examined

Maximum: 10 pts

Each item is evaluated on all criteria and given two scores out of 10, presented as context, then description. When graphed, my expectation is that the evidence will fall into two clusters: a small group of generally well-described items at the top right of the graph [7+/7+], and a much larger cluster of hearsay evidence that scores very low [1-4/1-4].
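A rubric like this is straightforward to mechanise. As a rough sketch (the class, field and cluster names below are mine, not part of the scheme itself, and the cluster thresholds simply follow the [7+/7+] and [1-4/1-4] bands above), the two scores could be computed like this:

```python
from dataclasses import dataclass

@dataclass
class EvidenceRating:
    # Context criteria: 0-3 each, plus 0-1 for timely reporting
    who: int        # 0 = not identified ... 3 = named, confirmed individual
    when: int       # 0 = unknown / >25 year range ... 3 = specific year
    where: int      # 0 = unknown / state only ... 3 = identifiable location
    reported: int   # 1 if reported within 10 years of discovery, else 0
    # Description criteria: 0-3 each, plus 0-1 for survival
    description: int     # 0 = not specified ... 3 = very detailed
    documentation: int   # 0 = none ... 3 = good quality photo or drawing
    identification: int  # 0 = no source ... 3 = named, qualified expert
    survives: int        # 1 if the item survives and can be re-examined

    def context_score(self) -> int:
        return self.who + self.when + self.where + self.reported

    def description_score(self) -> int:
        return (self.description + self.documentation
                + self.identification + self.survives)

    def cluster(self) -> str:
        """Rough classification matching the two expected clusters."""
        c, d = self.context_score(), self.description_score()
        if c >= 7 and d >= 7:
            return "assessable"   # worth the time and effort to evaluate
        if c <= 4 and d <= 4:
            return "hearsay"      # too vague to pursue
        return "intermediate"

# A hypothetical well-documented item: named finder, specific year and
# location, reported promptly, with a good photo, expert identification
# and a surviving artefact.
item = EvidenceRating(who=3, when=3, where=3, reported=1,
                      description=3, documentation=3,
                      identification=3, survives=1)
print(item.context_score(), item.description_score(), item.cluster())
# → 10 10 assessable
```

An item like the ‘farmer found a rock’ example would score 1 or less on most criteria and fall squarely into the hearsay cluster.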

Individual scores will be given on a summary score page, which is found here. The format for individual scoring shows all the component scores. As the work proceeds, scores may change to reflect new information.

Name of item – Context score
Main claimant – Description score

As an example:

Ptolemy VI coin – Context
Terry, Gilroy – Description
At the moment I am trying to populate the blog with items for which I have material to hand, so it is not yet a systematic treatment of all the evidence.

To see the individual scoring for different items considered to date go here.

