Detailed Description
Calculates performance metrics from pairs of true and predicted labels for each test case.
The designated reject group is treated in a special way.
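A minimal usage sketch; the import path is inferred from the source file layout (mgtaxa/MGT/PredProcessor.py), and the attribute names sen and spe are assumptions based on the constructor notes below, not verified API:

    # Hypothetical usage sketch; import path and attribute names are assumed.
    from MGT.PredProcessor import PerfMetrics

    test = [0, 1, 1, 2, 2, 2]   # true labels
    pred = [0, 1, 2, 2, 2, 1]   # predicted labels

    pm = PerfMetrics(test=test, pred=pred, balanceCounts=True)
    print(pm.sen)  # masked per-class sensitivity (see @post below)
    print(pm.spe)  # masked per-class specificity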
Constructor & Destructor Documentation
def MGT::PredProcessor::PerfMetrics::__init__(self, test, pred, balanceCounts=True, maxLabel=None)
Constructor.
@param test sequence of test (true) labels (int)
@param pred sequence of predicted labels (int)
@param balanceCounts if True, use a confusion matrix balanced by per-class test counts
@param maxLabel maximum label value plus one; if None, computed from the actual test and pred values
@post this object has masked array attributes for specificity, sensitivity, true positives, etc. for each class
@todo Add additional metrics. One particular problem with per-class specificities arises when we
have no testing samples for a given class: then even one false positive makes the
per-class specificity zero, weighing down the average specificity.
@todo Check out MaskedArrayAlternative in scipy, which has an (experimental) masked record array -
this way we can create a single record array (table) with fields like sen, spe, tp, etc. and one record
for each class, with each field of each record masked separately. Sort of like an Excel table with
possible empty values.
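As a rough illustration of the metrics described above (a sketch, not the actual MGT implementation), per-class sensitivity and specificity can be derived from a confusion matrix using numpy masked arrays; entries with no data (0/0 divisions) come out masked:

    # Illustrative sketch only; function and variable names are hypothetical.
    import numpy as np
    import numpy.ma as ma

    def perf_from_labels(test, pred, maxLabel=None):
        test = np.asarray(test)
        pred = np.asarray(pred)
        if maxLabel is None:
            maxLabel = int(max(test.max(), pred.max())) + 1
        # Confusion matrix: rows = true labels, columns = predicted labels
        cm = np.zeros((maxLabel, maxLabel), dtype=float)
        np.add.at(cm, (test, pred), 1)
        tp = np.diag(cm)
        testCounts = cm.sum(axis=1)   # test samples per true class
        predCounts = cm.sum(axis=0)   # samples per predicted class
        with np.errstate(divide='ignore', invalid='ignore'):
            sen = ma.masked_invalid(tp / testCounts)  # sensitivity (recall)
            spe = ma.masked_invalid(tp / predCounts)  # specificity as used here
        # Note: a class with no test samples but >0 false positives still gets
        # specificity 0 rather than masked - the issue raised in the @todo above.
        return sen, spe, tp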
Member Function Documentation
def MGT::PredProcessor::PerfMetrics::confMatrCsv(self, out, m=None)
Write the confusion matrix in CSV format.
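A hedged call sketch, assuming `out` accepts an open writable file object and that omitting `m` writes the object's own confusion matrix:

    # Hypothetical call; parameter semantics are inferred from the signature.
    from MGT.PredProcessor import PerfMetrics  # assumed import path

    pm = PerfMetrics(test=[0, 1, 1, 2], pred=[0, 1, 2, 2])
    with open("confusion.csv", "w") as out:
        pm.confMatrCsv(out)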
Member Data Documentation
This label is designated for the reject group.
The reject group contributes to sensitivity but has undefined specificity.
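To illustrate the statement above (a sketch, not the library's code): with masked per-class arrays, the reject group's specificity entry can be masked so it drops out of averages, while true samples predicted into the reject group still count as misses against their class's sensitivity:

    # Illustrative only; the reject label's value is an assumption.
    import numpy as np
    import numpy.ma as ma

    rejectLabel = 0                                      # assumed reject label
    spe = ma.masked_invalid(np.array([0.0, 0.9, 0.8]))   # toy per-class specificities
    spe[rejectLabel] = ma.masked                         # undefined for the reject group
    print(spe.mean())                                    # average skips the masked entry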
The documentation for this class was generated from the following file:
- mgtaxa/MGT/PredProcessor.py