Software List

A BRIEF DESCRIPTION

DIESIRAE

DIESIRAE is a prototype for extracting and indexing knowledge from natural language documents. The underlying domain model relies on a conceptual level, which represents the domain knowledge and is described by means of a Domain Ontology (as our main industrial partner was a wine company, the Domain Ontology is related to winemaking), and on a lexical level, based on WordNet, which represents the domain vocabulary. A stochastic model, which combines HMM and Maximum Entropy models in a novel way, stores the mapping between the two levels, taking into account the linguistic context of words. Such a context contains not only the surrounding words, but also the morphologic and syntactic information extracted by means of Natural Language Processing tools. During the document indexing phase, the stochastic model is used to disambiguate word senses. The Semantic Information Retrieval engine we developed supports simple keyword-based queries as well as natural language queries. The engine can also extend the domain knowledge, discovering new, relevant concepts to add to the domain model.
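As an illustration of the sense-disambiguation idea, the sketch below runs Viterbi decoding over a toy HMM whose states are word senses and whose observations are words. All probabilities, words, and sense names are invented for the example; the actual DIESIRAE model combines HMM and Maximum Entropy components and uses a much richer linguistic context.

```python
import math

# Toy HMM for word-sense disambiguation (illustrative only; all values are
# invented). States are senses; observations are words.
start = {"wine": 0.5, "color": 0.5}
trans = {"wine": {"wine": 0.8, "color": 0.2},
         "color": {"wine": 0.3, "color": 0.7}}
emit = {"wine": {"port": 0.4, "red": 0.2, "glass": 0.4},
        "color": {"port": 0.1, "red": 0.6, "glass": 0.3}}

def viterbi(words):
    """Return the most likely sense sequence for a list of words."""
    states = list(start)
    # scores[s] = best log-probability of any path ending in sense s
    scores = {s: math.log(start[s]) + math.log(emit[s][words[0]])
              for s in states}
    back = []
    for w in words[1:]:
        prev, scores, ptr = scores, {}, {}
        for s in states:
            best = max(states, key=lambda p: prev[p] + math.log(trans[p][s]))
            scores[s] = (prev[best] + math.log(trans[best][s])
                         + math.log(emit[s][w]))
            ptr[s] = best
        back.append(ptr)
    last = max(states, key=scores.get)
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))
```

Note how the transition probabilities let the previous sense influence the current one, which is the role the linguistic context plays in the real model.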

DIESIRAE is part of the ArtDeco project (see the Projects section above).

Designed and developed by:
T. Barbieri, L. Mosca, and L. Sbattella

Architecture


A short demo of DIESIRAE (a brief description of the query syntax can be found in the attached document)

We do not plan to release DIESIRAE, at this time, as it is still in an early stage of development.


AudioTact and L-MATH

Writing and reading formulas, or perceiving the graph of a function, are major obstacles that stand between a blind student and an efficient, complete study of mathematics. L-MATH provides the following modules for blind students.

  • BlindMath: a very efficient editor that allows blind users to insert a formula and obtain a LaTeX file as output.
  • TalkingMath: an application that reads a formula aloud in a very efficient way, by means of an original adaptive algorithm.
  • BlindGraph: an application that allows graph exploration.

A new, inexpensive device, AudioTact, can generate audio and tactile stimuli at the same time, for images that have been suitably enriched.

Looking at the graph of a function, a student can easily perceive elements of primary importance, such as where the function increases or decreases; where the maxima, minima, and inflection points are; and where the points of discontinuity are. If the user knows these landmarks, she/he can find them and understand how the graph proceeds. L-MATH permits the simultaneous use of aural and haptic feedback to detect such meaningful aspects.
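As a rough illustration of the landmark-detection idea (not the actual L-MATH algorithm), the sketch below scans sampled values of a function and reports the direction changes, that is, the maxima and minima a user could then explore through audio and tactile cues.

```python
# Illustrative sketch: scan sampled function values and emit cue events
# where the graph changes direction. The real L-MATH/AudioTact pipeline
# works on suitably enriched images, not on raw samples like these.
def landmarks(ys):
    """Return (index, kind) pairs for local maxima and minima."""
    events = []
    for i in range(1, len(ys) - 1):
        if ys[i - 1] < ys[i] > ys[i + 1]:
            events.append((i, "maximum"))
        elif ys[i - 1] > ys[i] < ys[i + 1]:
            events.append((i, "minimum"))
    return events

ys = [x * x for x in range(-3, 4)]  # parabola: single minimum at x = 0
```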

Designed and developed by:
T. Barbieri, L. Mosca, and L. Sbattella

AudioTact: Pen for Vibrational and Audio Cues (prototype, patented)


A short demo of L-MATH (Italian)

BiText

Nowadays, the world is becoming so multilingual that the ability to speak a second language has never been as vital as it is today. People can develop their proficiency in foreign languages in many different ways, and reading is undoubtedly one of the best ways to improve their skills.

Reading a book in its original language is no easy task, however, and often leads the reader to give up. From this point of view, multilingual books have many advantages for foreign-language learners: for example, they can read in the original language and use the translated text as a resource to verify their understanding.

Although multilingual books provide text in two languages, switching between them and finding the translations is not easy. Readers have to move their eyes back and forth from the original-language text to the translation, and this eventually causes breaks in the reading activity. Moreover, with the translated text so close to her/his visual field, the reader tends to read it too often. Using two books, one in the original language and the other containing the translation, solves the latter problem but accentuates the former.

As a solution, ebooks can handle this valuable multilingual content efficiently, without the need to move the reader’s eyes. BiText is a multilingual ebook reader for the iPad. The application exploits the natural “dynamicity” of ebooks to tackle difficulties in understanding foreign-language content, uniting the original text with its literary translation. Furthermore, it makes use of gestural interfaces to avoid distractions while presenting original-language content, and fosters engagement in reading.

A single tap on the text displays the line translation dialog. A popover object, hovering above the contents of the screen, shows the translation; an arrow indicates the point from which the popover emerged.

When users double tap on a text fragment, the literary translation appears above the original text. As with the inline translation dialog, a popover object is used to display the translation.

Users can touch a footnote link and view additional information in a dialog. The BiText reader supports two types of footnotes: those added by the author and those added by the translator. Author’s footnotes contain a translation button next to the footnote content; users can tap this button to view the translation of the footnote. Naturally, translator’s footnotes do not have a translation button, since they are only available in the translator’s language.
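The gesture-to-action mapping described above can be summarized in a small dispatch table. The sketch below is purely illustrative (BiText is an iOS application; the names here are invented):

```python
# Hypothetical dispatch table mirroring the gestures described above;
# the real app wires these to iOS gesture recognizers and popovers.
ACTIONS = {
    "single_tap": "show inline translation popover",
    "double_tap": "show literary translation popover",
    "footnote_tap": "show footnote dialog",
}

def handle(gesture):
    """Return the action for a gesture, ignoring unknown ones."""
    return ACTIONS.get(gesture, "ignore")
```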

Summary and PDF of the thesis.

Designed and developed by:
G. Gokce, L. Sbattella, and R. Tedesco

eBook listing


A short demo of BiText

The source code will be released soon, as open-source software.

KEaKI

The tool facilitates text comprehension, in particular for people with dyslexia, by means of summaries and mental maps. In a given domain, the tool defines an ontology describing actors and the actions they are involved in. Verbal frames are used to map verbs (representing actions) and nouns (representing actors). A rule-based approach selects relevant portions of text, creating a summary. Moreover, the text can be represented as a graph (a mental map). Finally, the tool can discover new concepts, adding them to the domain ontology.
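The mental-map idea can be sketched as a graph built from subject-verb-object triples. The triples below are invented; KEaKI derives them from verbal frames and the domain ontology:

```python
# Invented example triples; KEaKI extracts actors and actions from text
# via verbal frames, then links them in a graph (the mental map).
triples = [
    ("farmer", "harvests", "grapes"),
    ("winery", "presses", "grapes"),
]

def build_map(triples):
    """Return an adjacency dict: actor -> list of (action, actor) edges."""
    graph = {}
    for subj, verb, obj in triples:
        graph.setdefault(subj, []).append((verb, obj))
    return graph
```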

KEaKI is part of the CATS project (see the Projects section above). Summary and PDF of the thesis (Italian).

Designed and developed by:
C. Borgnis, L. Sbattella, and R. Tedesco

KEaKI calculates a summary, and infers new concepts


A short demo of KEaKI.

A new version is currently under development. The source code will be released soon, as open-source software.

PoliSpell

Spellcheckers integrated into word processors work fine with common errors, but they perform poorly on the mistakes made by people with dyslexia. Such errors are quite peculiar (e.g., word splitting or merging), and conventional spellcheckers are not well equipped to deal with them. PoliSpell is a spellchecker and word predictor, for the Italian language, tailored to people with dyslexia. PoliSpell can correct real-word errors, taking the context into account. In addition, PoliSpell defines some requisites that a user interface should satisfy in order to present the corrections and predictions returned by the tool. The prediction functionality suggests the next most probable word, given the preceding text.
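As a rough sketch of the prediction functionality, the fragment below trains a toy bigram model and suggests the most probable next word. PoliSpell’s actual model is tailored to the errors typical of dyslexia and uses a richer notion of context:

```python
from collections import Counter, defaultdict

# Toy bigram next-word predictor (illustrative only).
def train(corpus):
    """Count word bigrams over a list of sentences."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            counts[a][b] += 1
    return counts

def predict(counts, word):
    """Most frequent follower of `word`, or None if unseen."""
    followers = counts.get(word)
    return followers.most_common(1)[0][0] if followers else None
```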

PoliSpell is part of the CATS project (see the Projects section above).

Designed and developed by:
A. Quattrini Li, L. Sbattella, and R. Tedesco

A short demo of PoliSpell.

The source code will be released soon, as open-source software.

SPARTA2

Providing access to complex contents is a challenge authors are required to cope with, and it is particularly hard when contents have to be accessed by people with cognitive and learning disabilities. SPARTA2 is a tool supporting the authoring of highly accessible texts. Using the tool, a text can be tailored to meet the requirements of a specific target audience. The tool not only calculates the current readability level of the text, but also actively supports authors by suggesting where the critical parts are and how to modify them. The tool is integrated with the Word 2007 user interface and is particularly easy to use.

In contrast with other approaches, SPARTA2 makes a distinction between readability and understandability as, in our opinion, these concepts capture different aspects of the complexity of a text: a text could be highly readable, since its syntax is extremely simple, yet extremely hard to understand because of the lexicon used. In our approach, readability evaluates the structure of sentences, while understandability captures the lexical aspects.

The Readability Index is composed of three sub-indexes: the Gulpease Index, the Chunk Index, and the Chunk Type Index.

The Gulpease Index is a widely used readability formula for the Italian language (SPARTA2 currently analyses Italian texts; however, it is easy to adapt it to different languages). This approach, similar to Flesch’s and widely adopted in the readability literature, does not take into account the deep structure of the sentences.
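For reference, the Gulpease formula combines the counts of letters, words, and sentences of the whole text:

```python
# Gulpease Index: G = 89 + (300 * sentences - 10 * letters) / words.
# Values near 100 indicate very easy text; values near 0, very hard text.
def gulpease(letters, words, sentences):
    return 89 + (300 * sentences - 10 * letters) / words
```

For example, a text with 400 letters, 80 words, and 4 sentences scores 54, a mid-range readability level.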

The Chunk Index and the Chunk Type Index take into account the structure of the sentences in terms of chunks. These indexes are based on the analysis performed by CHAOS, a shallow parser for Italian. In particular, the Chunk Index relates the number of chunks in a text to its readability. Using the raw number of chunks, however, ignores the fact that different chunk types could have different readability. Thus, we added the Chunk Type Index, which is based on the distribution of chunk types in the text.

The Understandability Index measures the complexity related to the lexicon. The index is based on De Mauro’s basic Italian dictionary, which contains the 4,700 most frequently used lemmas of the Italian language. The vocabulary is divided into three sections: basic vocabulary, highly used words, and less used words.
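The lexical-tier lookup can be sketched as follows; the word lists below are invented placeholders, while the real index relies on De Mauro’s dictionary sections:

```python
# Placeholder word lists (invented); the real Understandability Index
# uses the three sections of De Mauro's basic Italian dictionary.
BASIC = {"vino", "uva"}
HIGH_USE = {"vendemmia"}
LOW_USE = {"enopolio"}

def tier(lemma):
    """Classify a lemma into its vocabulary section."""
    if lemma in BASIC:
        return "basic"
    if lemma in HIGH_USE:
        return "highly used"
    if lemma in LOW_USE:
        return "less used"
    return "out of vocabulary"
```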

In our approach, we recognize that authors should be guided through the process of simplifying their texts. Thus, SPARTA2 can detect and report potential readability issues, analysing the structure and the lexicon of the sentences. This functionality fully exploits the chunk analysis performed by CHAOS: the result of the analysis is passed to a set of plug-ins, which can generate warnings for the user, also suggesting possible solutions.

The user interface of SPARTA2 is integrated into Word 2007; the indexes are visible to the author at all times and can be updated by clicking a button. SmartTags appear whenever a warning is reported to the user, and the related menus contain the solutions proposed by the plug-ins.

Designed and developed by:
A. Colombo, L. Sbattella, and R. Tedesco

The SPARTA2 architecture


A short demo of SPARTA2.


The source code will be released soon, as open-source software.

FLUtE

FLUtE (Fuzzy Logic Ultimate Engine) is a generic Fuzzy Logic Engine for the .NET Framework.
You can find it at the FLUtE SourceForge web site, released under the LGPL 2.1 license.
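FLUtE itself is a .NET library; the Python fragment below merely illustrates the kind of triangular membership function that sits at the core of a fuzzy-logic engine:

```python
# Illustrative triangular fuzzy membership function (not FLUtE code).
def triangular(a, b, c):
    """Membership rising from a to a peak at b, then falling to c."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        if x <= b:
            return (x - a) / (b - a)
        return (c - x) / (c - b)
    return mu

# Hypothetical fuzzy set "warm" over temperatures in Celsius
warm = triangular(15.0, 22.0, 30.0)
```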

Designed and developed by:
F. Fontanella, L. Sbattella, and R. Tedesco

PrEmA

Many psychological and social studies have highlighted two distinct channels we use to exchange information: an explicit, linguistic channel and an implicit, paralinguistic channel. The latter contains information about the emotional state of the speaker, providing clues about the implicit meaning of the message. In particular, the paralinguistic channel can improve applications requiring human-machine interaction (for example, speech recognition systems or Conversational Agents), as well as support the analysis of human-human interactions (think, for example, of clinical or forensic applications).

PrEmA is a tool that recognizes and classifies both the emotions and the speech style of the speaker, relying on prosodic features. In particular, speech-style recognition is, to our knowledge, new, and could be used to infer interesting clues about the state of the conversation. We selected two sets of prosodic features and trained two classifiers based on Linear Discriminant Analysis.
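As a minimal sketch of the classification idea, the fragment below builds a one-feature linear discriminant (a threshold halfway between the class means). The pitch values and labels are invented; PrEmA applies full Linear Discriminant Analysis to several prosodic features:

```python
import statistics

# Minimal one-feature linear discriminant (illustrative only).
def fit(values_a, values_b):
    """Return a decision threshold halfway between the two class means."""
    return (statistics.mean(values_a) + statistics.mean(values_b)) / 2

def classify(threshold, x, low_label="calm", high_label="excited"):
    return low_label if x < threshold else high_label

# Hypothetical mean-pitch values (Hz) for two emotional states
calm = [110, 120, 115]
excited = [180, 200, 190]
t = fit(calm, excited)
```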

Designed and developed by:
L. Colombo, C. Rinaldi, and L. Sbattella

Recognition of emotions


The source code will be released soon, as open-source software.

PoliNotes & PoliNotes 2

Nowadays, more and more university classes are given by means of slide-based presentations, possibly including multimedia contents. The advantages of such presentations are well known: slide-based presentations permit teachers to prepare well-polished materials; moreover, multimedia learning can enhance class effectiveness, facilitating retention and comprehension of concepts. Unfortunately, slide-based presentations also have shortcomings. In particular, taking notes becomes quite difficult, and the result is often unsatisfactory. These issues particularly affect students who have problems taking notes; for example, students with learning, motor, or sensory impairments. The objective of our work is the design and development of a collection of software applications that allow convenient note-taking during slide-based classes. Such applications permit students to receive, on their Tablet PCs and in real time, the slides presented by teachers. Students can edit the objects contained in the slides, as well as add their own pen-based notes.

In the following images: on the left, a slide showing text, formulas, and images; on the right, OneNote (with the PoliLips plug-in) showing the slide content (text, formulas, images) as editable and annotatable objects.

PoliNotes is part of the CATS project (see the Projects section above). Summary and PDF of the thesis (Italian).

Designed and developed by:
A. Marrandino, L. Sbattella, and R. Tedesco

PoliNotes general idea


A short demo of PoliNotes.

PoliNotes 2 reimplements the client-side application, replacing OneNote with an ad-hoc “Windows Store”-style application for Windows 8. The new client application is simpler than OneNote, yet provides useful features such as the new “add space” gesture (see the video below).

We plan to release the source code as open-source software.

Designed and developed by:
Diogo Borges Krobath, L. Sbattella, and R. Tedesco

A short demo of PoliNotes 2.


PoliLips

The main idea of PoliLips arose during conversations with deaf students at the MultiChancePoliTeam (the service for students with disabilities at Politecnico di Milano). Students reported strong difficulties during class attendance (and their exam scores, obtained at the end of the courses, confirmed the problem).

Several deaf students provided us with an interesting clue, reporting that lipreading was their preferred compensation mechanism, sometimes mixed with aural information. However, several factors affected the effectiveness of lipreading: some words are inherently hard to lip-read; some people can be particularly hard to understand (for example, people who talk very fast); and the position of the speaker can prevent good observation of her/his facial movements. Thus, we started wondering whether a specific device could increase the effectiveness of lipreading during classes.

PoliLips mixes the three information modalities we can collect from the teacher: visual (lipreading), aural, and textual (generated by an ASR). In doing so, our goal was twofold: first, we argued that each modality could compensate for errors present in, or induced by, the others (for example, if the ASR failed to transcribe a word, the student could use lipreading to correct the error and understand the correct word); second, the resulting system could handle different degrees of hearing loss (especially profound deafness), as well as students’ preferences in compensation mechanisms.

PoliLips captures and sends to students’ laptops, via wired or wireless network, an audio/video/textual stream composed of a video of the teacher’s face, her/his voice, and a textual transcription performed by an ASR. PoliLips facilitates class attendance when the student cannot see the teacher’s face (for example, whenever the teacher writes on the blackboard), when the teacher is too far away, or when she/he is not in front of the student. The device could be useful not only in university classrooms, but in any context where a speaker talks to a large audience and network connections are available.

PoliLips is a hardware/software solution: the teacher wears a hardware device, while specific software applications are installed on the teacher’s and students’ laptops. We designed and built the hardware, relying on off-the-shelf components, and developed the applications. ASR functionalities were provided by a commercial application.

PoliLips is part of the CATS project (see the Projects section above).

Designed and developed by:
L. Sbattella and R. Tedesco

Microphone


A short demo of PoliLips.

The source code will be released soon, as open-source software.

PoliBook

PoliBook emulates a paper book on a Windows 8 tablet, permitting students to add notes and sheets. The application provides different views:

  • Opened book: two pages are shown on the display
  • Two opened books: two books are opened, and one page per book is shown; the two books are navigable independently
  • Book plus notebook: one page of the opened book is shown side-by-side with a page of the notebook associated with the current book; book and notebook are navigable independently

The user can:

  • Navigate the book (by page turning, TOC, etc.)
  • Annotate book pages, using a stylus
  • Add sheets with notes “inside” the book

Designed and developed by:
Gert Petja, Licia Sbattella, Roberto Tedesco

Annotations


A short demo of PoliBook.

The application is currently under development; we plan to release the source code as soon as possible.
