💫 This library is now integrated into spaCy v3.4 as `debug data`!
A simple tool to analyze the Spans in your dataset. It's tightly integrated with spaCy, so you can easily incorporate it into existing NLP pipelines. This is also a reproduction of Papay et al.'s work on *Dissecting Span Identification Tasks with Performance Prediction* (EMNLP 2020).
Using pip:

```sh
pip install spacy-span-analyzer
```

Directly from source (I highly recommend running this within a virtual environment):

```sh
git clone git@github.com:ljvmiranda921/spacy-span-analyzer.git
cd spacy-span-analyzer
pip install .
```
You can use the Span Analyzer as a command-line tool:

```sh
spacy-span-analyzer ./path/to/dataset.spacy
```
Or as an imported library:

```python
import spacy
from spacy.tokens import DocBin
from spacy_span_analyzer import SpanAnalyzer

nlp = spacy.blank("en")  # or any Language model

# Ensure that your dataset is a DocBin
doc_bin = DocBin().from_disk("./path/to/data.spacy")
docs = list(doc_bin.get_docs(nlp.vocab))

# Run SpanAnalyzer and get span characteristics
analyze = SpanAnalyzer(docs)
analyze.frequency
analyze.length
analyze.span_distinctiveness
analyze.boundary_distinctiveness
```
Inputs are expected to be a list of spaCy Docs or a DocBin (if you're using the command-line tool).
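If your annotations start out as plain `Doc` objects, you can pack them into a `DocBin` yourself before running the analyzer (a minimal sketch using a blank pipeline; the file name and texts are illustrative):

```python
import spacy
from spacy.tokens import DocBin

nlp = spacy.blank("en")
docs = [nlp("First example."), nlp("Second example.")]

# DocBin serializes a collection of Docs into a single .spacy file
doc_bin = DocBin(docs=docs)
doc_bin.to_disk("./dataset.spacy")
```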
In spaCy, you'd want to store your Spans in the `doc.spans` property, under a particular `spans_key` (`sc` by default). Unlike the `doc.ents` property, `doc.spans` allows overlapping entities. This is especially useful for downstream tasks like Span Categorization.
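For example, two spans that share a token can live side by side under the same key (a minimal sketch using a blank pipeline; the sentence is illustrative):

```python
import spacy

nlp = spacy.blank("en")
doc = nlp("The quick brown fox jumps over the lazy dog")

# Two overlapping spans: doc[1:4] and doc[3:5] both contain token 3 ("fox").
# This is fine in doc.spans, but would be rejected by doc.ents.
span_a = doc[1:4]  # "quick brown fox"
span_b = doc[3:5]  # "fox jumps"

doc.spans["sc"] = [span_a, span_b]
```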
A common way to do this is to use `char_span` to define a slice from your `Doc`:

```python
doc = nlp(text)
spans = []
for annotation in annotations:
    # char_span returns None if the character offsets
    # don't map onto valid token boundaries
    span = doc.char_span(
        annotation["start"],
        annotation["end"],
        annotation["label"],
    )
    spans.append(span)

# Put all spans under a spans_key
doc.spans["sc"] = spans
```
You can also achieve the same thing by using `set_ents` or by creating a `SpanGroup`.
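As a rough sketch of those two alternatives (the sentence and label are illustrative; note that `doc.ents`, unlike a `SpanGroup`, rejects overlapping spans):

```python
import spacy
from spacy.tokens import SpanGroup

nlp = spacy.blank("en")
doc = nlp("Span groups keep spans attached to their doc")

# A SpanGroup is the container type that backs each doc.spans entry;
# its spans may overlap.
group = SpanGroup(doc, name="sc", spans=[doc[0:2], doc[1:3]])
doc.spans["sc"] = group

# set_ents writes non-overlapping spans to doc.ents instead
doc.set_ents([doc.char_span(0, 4, label="TERM")])
```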