How good is a patent?
This is of course a highly subjective question, affected by many factors such as who is asking the question and how legally valid the patent is.
Nonetheless, objective numbers are sometimes wanted – not necessarily as the final arbiter of patent quality and value, but to support practical tasks, such as to:
- identify where to put the most effort into managing
- support decisions about patent renewal
- support decisions about buying patents
A while ago Ambercite developed the AmberScore metric for this very reason. More recently Ambercite has moved from analysing single patents to analysing patent families, and as part of this we have rerun the AmberScore metrics. We have also renormalised the metric – it is now set so that the average value across all families is 1.0.
Anything above 1 is above average, and there is no upper limit on AmberScore values for exceptional patents. AmberScore values can range from 0 upwards (a zero suggests that there are no known citation links for any family member).
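As a toy illustration of that renormalisation step (the scaling only – the formula that produces the raw scores is Ambercite's own and is not shown here), dividing every raw score by the mean sets the average to 1.0, and the hypothetical numbers below are invented for demonstration:

```python
# Hypothetical raw scores for four patent families (invented numbers).
raw = [0.2, 0.5, 1.0, 4.3]

# Renormalise: divide each score by the mean, so the average becomes 1.0.
mean = sum(raw) / len(raw)            # 1.5 for this sample
normalised = [s / mean for s in raw]

# The average of the normalised scores is now exactly 1.0,
# while the relative ranking of the families is unchanged.
```

Note that this scaling preserves the ordering of the families; it only fixes the average, so "above 1" always means "above average".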
AmberScore values are provided as part of the output when you search on patents in Cluster Searching.
How is AmberScore calculated?
AmberScore is based on the number and strength of the citation connections for the patent family, and how recent the forward citation connections are. Because it is based on patent families, all family members have the same score, whether filed in New Zealand or the US. So really, this is about the quality of the underlying invention rather than the individual patent.
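To make the general idea concrete, here is a purely hypothetical Python sketch of a citation-based score of this shape. The function name, the link-strength inputs, and the recency weighting are all illustrative assumptions – Ambercite's actual AmberScore formula is not public and will differ:

```python
from datetime import date

def illustrative_citation_score(backward_links, forward_links,
                                today=date(2024, 1, 1)):
    """Illustrative only - NOT Ambercite's formula.

    backward_links: list of link strengths (floats) to earlier families.
    forward_links:  list of (strength, citation_date) pairs for later families.
    Forward links are weighted so that more recent citations count for more.
    """
    score = 0.0
    for strength in backward_links:
        score += strength
    for strength, cite_date in forward_links:
        age_years = (today - cite_date).days / 365.25
        recency_weight = 1.0 / (1.0 + age_years)  # assumed decay curve
        score += strength * recency_weight
    return score
```

Under this sketch, a family with no citation links at all scores zero, and a forward citation from last year contributes more than the same citation from a decade ago – matching the two properties described above.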
What drives high AmberScore values?
Patents with high AmberScore values tend to have large patent families, with these families attracting lots of forward and backward citations. In other words:
- the owner has invested heavily in these patents (large families),
- they have filed into a crowded area (lots of other applicants have invested in patents in this area),
- and then lots of applicants have tried to improve on this invention.
All are thought to be predictors of higher value patents.
For example, consider the ‘Steve Jobs patent’ US7479949, filed to protect the first iPhone, which has an AmberScore value of 39.5. This is part of a large family of patent publications. We estimate there are 81 earlier patent families cited by this family, and 113 families that cited it afterward (it is difficult to directly compare our citation counts to other data for forward citation counts, as we only count one citation per patent family, to avoid duplication). Hence the high score, much higher than the average value of 1.
Also note that the same high score applies to every family member, such as EP2527969, KR20090046960A, or CA2658413.
Now compare this to, say, WO2004103505, filed for a doll toy. This was only ever filed as a PCT patent and a Japanese patent application. There are 6 citation links to earlier patent families, and just 1 citation link to a later family. Its AmberScore value is 0.04 – much lower than the AmberScore for the Steve Jobs patent, suggesting that it is not as valuable.
At this point any decent patent analyst would say ‘this is kinda obvious – it would have taken me seconds to come to the same conclusion’.
And indeed it is. But the value of metrics like these is that you don’t have to spend the seconds – so you can cover a lot of ground quickly. They can also challenge your view of which patents are important. That is not to say your opinion will be wrong – only that this provides a second opinion.
Are half the patents higher than average?
No – AmberScore values are not normally distributed. In the case of US patents, for example, the average AmberScore value for patents filed in the last 20 years is 1.7, but the median is 0.67. In other words, half of all patents have an AmberScore value of less than 0.67.
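This gap between mean and median is the usual signature of a right-skewed distribution, which a few lines of Python can illustrate. The scores below are invented for demonstration, not real AmberScore data: most values are small, and a couple of ‘Steve Jobs’-style outliers pull the mean well above the median.

```python
import statistics

# Invented, right-skewed sample of scores: mostly small values
# plus two large outliers at the top end.
scores = [0.1, 0.2, 0.3, 0.5, 0.6, 0.7, 0.9, 1.2, 3.0, 9.5]

mean = statistics.mean(scores)      # dragged upward by the outliers
median = statistics.median(scores)  # the middle of the pack

# For this sample the mean is 1.7 but the median is only 0.65:
# far more than half the sample sits below the "average".
```

So in a skewed distribution like this, being ‘above average’ is a higher bar than being in the top half.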
What are the limitations?
There are a few – no metric is perfect. AmberScore is only as good as the available citation data, which can be sparse for very recent publications, or for patents filed in countries where patent citation data is sparse or not published.
I sometimes liken using citation-based tools to using advanced survey tools to prospect for diamonds underground. You run a test over a patch of ground and get a result. What then?
- A ‘positive’ signal means that you need to investigate further – to start digging, in other words.
- No signal means that nothing was picked up in this test. There may still be diamonds buried beneath the ground, but given limited resources you may want to save your digging for a more promising location.
And this is certainly a lot better than digging everywhere.
How does this compare to other patent rating metrics?
Other patent quality metrics are available from different sources, and they all tend to work on slightly different models. We like AmberScore because it is scalable, fast, granular, and objective, and it avoids questionable assumptions about what drives patent quality. It can cover patents from many different countries, as shown above. Other vendors will claim other advantages for their metrics.
A high scoring patent by AmberScore will probably be high scoring in most other metrics.
Can we run metrics for very large portfolios?
Very easily – please contact us for details. We have run this analysis on a portfolio of 20,000 patents, for example.