# Plan

## The Idea

The DBpedia-Entity repository has base rankings for a select set of retrieval algorithms over multiple sets of queries. These base rankings were obtained by running the algorithms on the dataset, where the dataset was reduced to contain only a subset of all possible fields. In particular, the fields used by the base rankings were:

| Field | Description | Predicates | Notes |
| --- | --- | --- | --- |
| Names | Names of the entity | ``, ``, ``, ``, ``, ``, ``, ``, ``, ``, ``, ``, ``, ``, ``, ``, ``, ``, ``, ``, `` | |
| Categories | Entity types | `` | |
| Similar entity names | Entity name variants | `!`, `!`, `` | `!` denotes reverse direction (i.e. ``) |
| Attributes | Literal attributes of the entity | All ``, where *"o"* is a literal and *"p"* is not in *Names*, *Categories*, *Similar entity names*, or the blacklisted predicates. For each `` triple, if `p matches `, both *p* and *o* are stored (i.e. *"p o"* is indexed). | |
| Related entity names | URI relations of the entity | Similar to the *Attributes* field, but *"o"* should be a URI. | |

These fields were built from the following files of the 2015-10 dump:

- `anchor_text_en.ttl`
- `article_categories_en.ttl`
- `disambiguations_en.ttl`
- `infobox_properties_en.ttl`
- `instance_types_transitive_en.ttl`
- `labels_en.ttl`
- `long_abstracts_en.ttl`
- `mappingbased_literals_en.ttl`
- `mappingbased_objects_en.ttl`
- `page_links_en.ttl`
- `persondata_en.ttl`
- `short_abstracts_en.ttl`
- `transitive_redirects_en.ttl`

Two indexes are used for these results. Both indexes are likely implemented with the Nordlys package, which we describe below.

### Index A

- A new field called "catchall" is used; it encompasses the content of all other fields. Duplicate values are not removed from this field.

### Index B

- Anchor texts (i.e. the contents of the `` predicate) are added to both the "similar entity names" and "attributes" fields.
- Entity URIs are resolved differently for the "related entity names" field.
  Names for related entities are extracted in the same way as for the "names" field (see the predicates for "names" in the table above), but only one arbitrary name is used for each related entity.
- Category URIs are resolved using the `category_labels_en.ttl` file.
- Predicate URIs are resolved using the `infobox_property_definitions_en.ttl` file. If no name is defined for a predicate, the predicate is omitted.

More information about how the data was indexed can be found [here](https://iai-group.github.io/DBpedia-Entity/index_details.html).

Our hypothesis is that not all of these fields are of equal importance. Our idea is therefore to use some kind of hill-climbing algorithm to determine which combination of fields (or which field weights) produces the best output.

## Nordlys

Nordlys is a toolkit for entity-oriented and semantic search. It currently supports four entity-oriented tasks, which could be useful for our project:

- `Entity cataloging`
- `Entity retrieval`: returns a ranked list of entities in response to a query
- `Entity linking in queries`: identifies entities in a query and links them to the corresponding entries in the knowledge base
- `Target type identification`: detects the target types (or categories) of a query

The Nordlys toolkit was used to create the results described above; as such, it provides us with the means to reproduce those results. In addition, Nordlys provides a Python interface that can be used to implement the hill-climbing algorithm. The data used by the results is also bundled with the Nordlys Python package and has already been indexed. This allows us to use the Python package without having to convert or index the data ourselves.
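To make the plan concrete, the hill-climbing idea could be sketched as follows. This is a minimal sketch, not a definitive implementation: the `evaluate` callback is a hypothetical stand-in for a function that runs a retrieval with the given per-field weights (e.g. through the Nordlys Python interface) and returns an effectiveness score such as NDCG, where higher is better. The field names are taken from the table above.

```python
import random

# The five fields from the table in "The Idea" section.
FIELDS = ["names", "categories", "similar_entity_names",
          "attributes", "related_entity_names"]


def hill_climb(evaluate, step=0.1, max_iters=200, seed=0):
    """Stochastic hill climbing over per-field weights.

    `evaluate` is a hypothetical callback: it takes a dict mapping
    field name to weight, runs a retrieval with those weights, and
    returns an effectiveness score (higher is better).
    """
    rng = random.Random(seed)
    weights = {f: 1.0 for f in FIELDS}  # start from uniform weights
    best_score = evaluate(weights)
    for _ in range(max_iters):
        # Perturb one randomly chosen field weight up or down.
        candidate = dict(weights)
        field = rng.choice(FIELDS)
        candidate[field] = max(0.0, candidate[field] + rng.choice([-step, step]))
        score = evaluate(candidate)
        if score > best_score:  # keep the perturbation only if it helps
            weights, best_score = candidate, score
    return weights, best_score
```

A weight of zero effectively drops a field, so the same loop answers both questions at once: which fields matter, and how much. More sophisticated variants (random restarts, coordinate descent, simulated annealing) could be swapped in later without changing the `evaluate` interface.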