Patch against Text::EtText 2.3: Fixes a number of bugs, particularly with regard to XHTML validation. Also slightly modifies the handling of link definitions.
Personal Projects: Projects - completed, future and works-in-progress.
A Comparative Study on Word Embeddings in Deep Learning for Text Classification: Word embeddings act as an important component of deep models for providing input features in downstream language tasks, such as sequence labelling and text classification. In the last decade, a substantial number of word embedding methods have been proposed for this purpose, mainly falling into the categories of classic and context-based word embeddings. In this paper, we conduct controlled experiments to systematically examine both classic and contextualised word embeddings for the purposes of text classification. To encode a sequence from word representations, we apply two encoders, namely CNN and BiLSTM, in the downstream network architecture. To study the impact of word embeddings on different datasets, we select four benchmark classification datasets with varying average sample length, comprising both single-label and multi-label classification tasks. The evaluation results with confidence intervals indicate that CNN as the downstream encoder outperforms BiLSTM in most situations, especially for document context-insensitive datasets. This study recommends choosing CNN over BiLSTM for document classification datasets where the context in sequence is not as indicative of class membership as it is for sentence datasets. For word embeddings, concatenation of multiple classic embeddings or increasing their size does not lead to a statistically significant difference in performance, despite a slight improvement in some cases. For context-based embeddings, we studied both ELMo and BERT. The results show that BERT overall outperforms ELMo, especially for long document datasets. Compared with classic embeddings, both achieve improved performance on short datasets, while the improvement is not observed on longer datasets.
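As a rough illustration of the two downstream encoders compared in this study, the following PyTorch sketch shows a 1-D CNN with max-over-time pooling and a BiLSTM classifier operating on pre-computed word embeddings. The dimensions, class counts and pooling choices are illustrative assumptions rather than the paper's exact configuration.

```python
# Minimal PyTorch sketch (illustrative sizes, not the paper's configuration) of the
# two downstream encoders compared here: a 1-D CNN and a BiLSTM over word embeddings.
import torch
import torch.nn as nn

class CNNEncoder(nn.Module):
    def __init__(self, emb_dim=300, n_filters=100, n_classes=4):
        super().__init__()
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=3, padding=1)
        self.fc = nn.Linear(n_filters, n_classes)

    def forward(self, emb):                              # emb: (batch, seq_len, emb_dim)
        h = torch.relu(self.conv(emb.transpose(1, 2)))   # (batch, n_filters, seq_len)
        h = torch.max(h, dim=2).values                   # max-over-time pooling
        return self.fc(h)

class BiLSTMEncoder(nn.Module):
    def __init__(self, emb_dim=300, hidden=128, n_classes=4):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, emb):                              # emb: (batch, seq_len, emb_dim)
        out, _ = self.lstm(emb)
        return self.fc(out[:, -1, :])                    # classify from the final step

# `emb` holds pre-computed word representations (e.g. GloVe vectors, or ELMo/BERT
# outputs for the contextual case); either encoder then feeds a softmax classifier.
```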
ACRE: Agent Conversation Reasoning Engine: Within Multi Agent Systems, communication by means of Agent Communication Languages has a key role to play in the co-operation, co-ordination and knowledge-sharing between agents. Despite this, complex reasoning about agent messaging, and specifically about conversations between agents, tends not to have widespread support amongst general-purpose agent programming languages. ACRE (Agent Conversation Reasoning Engine) aims to complement the existing logical reasoning capabilities of agent programming languages with the capability of reasoning about complex interaction protocols in order to facilitate conversations between agents. This paper outlines the aims of the ACRE project and gives details of the functioning of a prototype implementation within the AFAPL2 agent programming language.
A Decade of Legal Argumentation Mining: Datasets and Approaches: The growing research field of argumentation mining (AM) in the past ten years has made it a popular topic in Natural Language Processing. However, there are still limited studies focusing on AM in the context of legal text (Legal AM), despite the fact that legal text analysis more generally has received much attention as an interdisciplinary field of traditional humanities and data science. The goal of this work is to provide a critical data-driven analysis of the current situation in Legal AM. After outlining the background of this topic, we explore the availability of annotated datasets and the mechanisms by which these are created. This includes a discussion of how arguments and their relationships can be modelled, as well as a number of different approaches to divide the overall Legal AM task into constituent sub-tasks. Finally we review the dominant approaches that have been applied to this task in the past decade, and outline some future directions for Legal AM research.
A Deep Learning Model for Heterogeneous Dataset Analysis - Application to Winter Wheat Crop Yield Prediction: Western countries rely heavily on wheat, and yield prediction is crucial. Time-series deep learning models, such as Long Short Term Memory (LSTM), have already been explored and applied to yield prediction. Existing literature reports that they perform better than traditional Machine Learning (ML) models. However, the existing LSTM cannot handle heterogeneous datasets (a combination of data that varies with time and data that remains static). In this paper, we propose an efficient deep learning model that can deal with heterogeneous datasets. We developed the system architecture and applied it to a real-world dataset in the digital agriculture domain. We showed that it outperformed the existing ML models.
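The following is a hedged sketch of the general idea behind such a heterogeneous model (not the authors' published architecture): a time-series branch handled by an LSTM is fused with static, time-invariant features before a regression head. The feature counts and layer sizes are assumed for illustration.

```python
# Hedged sketch of the general idea (not the authors' published architecture):
# an LSTM handles the time-varying inputs, and its final hidden state is fused
# with static, time-invariant features before a regression head.
import torch
import torch.nn as nn

class HeterogeneousYieldModel(nn.Module):
    def __init__(self, n_temporal=10, n_static=5, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_temporal, hidden, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(hidden + n_static, 32), nn.ReLU(), nn.Linear(32, 1)
        )

    def forward(self, temporal, static):
        # temporal: (batch, time_steps, n_temporal), e.g. weekly weather variables
        # static:   (batch, n_static), e.g. soil properties that do not change
        _, (h_n, _) = self.lstm(temporal)
        fused = torch.cat([h_n[-1], static], dim=1)
        return self.head(fused)                       # predicted yield
```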
AF-ABLE in the Multi Agent Contest 2009: This is the second year in which a team from University College Dublin has participated in the Multi Agent Contest. This paper describes the system that was created to participate in the contest, along with observations of the team's experiences in the contest. The system itself was built using the AFAPL agent programming language running on the Agent Factory platform. A hybrid control architecture inspired by the SoSAA strategy aided in the separation of concerns between low-level behaviours (such as movement and obstacle evasion) and higher-level planning and strategy.
AF-ABLE: System Description: This paper describes our entry to the Multi-Agent Programming Contest 2009. Based on last year's entry, we incorporated new features of the employed agent programming language and adopted a simplified hierarchical organisation metaphor. This approach, together with a re-design of the task allocation algorithm, should result in increased efficiency and effectiveness.
A HOTAIR Scalability Model: This paper describes a scalable mathematical model for dynamically calculating the number of agents to optimally handle the current load within the Highly Organised Team of Agents for Information Retrieval (HOTAIR) architecture.
An Agent-Based Approach to Component Management: This paper details the implementation of a software framework that aids the development of distributed and self-configurable software systems. This framework is an instance of a novel integration strategy called SoSAA (SOcially Situated Agent Architecture), which combines Component-Based Software Engineering and Agent-Oriented Software Engineering, drawing its inspiration from hybrid agent control architectures. The framework defines a complete construction process by enhancing a simple component-based framework with reasoning and self-awareness capabilities through a standardized interface. The capabilities of the resulting framework are demonstrated through its application to a non-trivial Multi Agent System (MAS). The system in question is a pre-existing Information Retrieval (IR) system that has not previously taken advantage of CBSE principles. In this paper we contrast these two systems so as to highlight the benefits of using this new hybrid approach. We also outline how component-based elements may be integrated into the Agent Factory agent-oriented application framework.
A Neural Meta-Model for Predicting Winter Wheat Crop Yield: This study presents the development and evaluation of machine learning models to predict winter wheat crop yield using heterogeneous soil and weather data sets. A concept of an error stabilisation stopping mechanism is introduced in an LSTM model specifically designed for heterogeneous datasets. The comparative analysis of this model against a baseline LSTM model highlighted its superior predictive performance. Furthermore, weighted regression models were developed to capture environmental factors using agroclimatic indices. Finally, a neural meta-model was built by combining the predictions of several individual models. The experimental results indicated that a neural meta-model with an MAE of 0.82 and an RMSE of 0.983 tons/hectare demonstrated notable performance, highlighting the importance of incorporating weighted regression models based on agroclimatic indices. This study shows the potential for improved yield prediction through the proposed model and the subsequent development of a meta-model.
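A minimal sketch of the meta-model idea, with all details assumed rather than taken from the paper: a small neural network is trained on the predictions of several individual yield models and learns how to combine them.

```python
# Illustrative sketch only (details assumed, not taken from the paper): a small
# neural network is trained on the predictions of several individual yield models
# and learns how to combine them into a single estimate.
from sklearn.neural_network import MLPRegressor

def build_meta_model(base_predictions, true_yield):
    """base_predictions: (n_samples, n_base_models) array, one column per base model.
    true_yield: (n_samples,) observed yield in tons/hectare."""
    meta = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    meta.fit(base_predictions, true_yield)
    return meta

# At inference time each base model (e.g. the heterogeneous LSTM and the weighted
# regression models) predicts first, and the meta-model combines those outputs:
# combined = meta.predict([[3.1, 2.8, 3.0]])   # one sample's base predictions
```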
Applying Machine Learning Diversity Metrics to Data Fusion in Information Retrieval: The Supervised Machine Learning task of classification has parallels with Information Retrieval (IR): in each case, items (documents in the case of IR) are required to be categorised into discrete classes (relevant or non-relevant). Thus a parallel can also be drawn between classifier ensembles, where evidence from multiple classifiers is combined to achieve a superior result, and the IR data fusion task. This paper presents preliminary experimental results on the applicability of classifier ensemble diversity metrics in data fusion. Initial results indicate a relationship between the quality of the fused result set (as measured by MAP) and the diversity of its inputs.
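A minimal sketch of the analogy described above, under assumed details: each input result list is treated as a "classifier" that labels a document relevant if it retrieves it within its top k, and a standard ensemble diversity measure (pairwise disagreement) is computed over the inputs to a fusion run. The metric choice and cut-off are illustrative, not necessarily those used in the paper.

```python
# Minimal sketch of the analogy (metric choice and cut-off are illustrative):
# treat each input result list as a "classifier" that labels a document relevant
# if it retrieves it in its top k, then compute average pairwise disagreement,
# a standard classifier-ensemble diversity measure, over the fusion inputs.
from itertools import combinations

def pairwise_disagreement(result_lists, all_docs, k=100):
    votes = [{d: (d in r[:k]) for d in all_docs} for r in result_lists]
    pairs = list(combinations(range(len(votes)), 2))
    total = 0.0
    for i, j in pairs:
        total += sum(votes[i][d] != votes[j][d] for d in all_docs) / len(all_docs)
    return total / len(pairs)     # higher value = more diverse fusion inputs
```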
Argument Mining with Graph Representation Learning: Argument Mining (AM) is a unique task in Natural Language Processing (NLP) that targets arguments: a meaningful logical structure in human language. Since arguments play a significant role in the legal field, the interdisciplinary study of AM on legal texts has significant promise. For years, a pipeline architecture has been used as the standard paradigm in this area. Although this simplifies the development and management of AM systems, the connection between different parts of the pipeline causes inevitable shortcomings such as cascading error propagation. This paper presents an alternative perspective on the AM task, whereby legal documents are represented as graph structures and the AM task is undertaken as a hybrid approach incorporating Graph Neural Networks (GNNs), graph augmentation and collective classification. GNNs have been demonstrated to be an effective method for representation learning on graphs, and they have been successfully applied to many other NLP tasks. In contrast to previous pipeline-based architectures, our approach results in a single end-to-end classifier for the identification and classification of argumentative text segments. Experiments based on corpora from both the European Court of Human Rights (ECHR) and the Court of Justice of the European Union (CJEU) show that our approach achieves strong results compared to state-of-the-art baselines. Both the graph augmentation and collective classification steps are shown to improve performance on both datasets when compared to using GNNs alone.
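The sketch below is conceptual only and does not reproduce the paper's actual GNN architecture, graph augmentation or collective classification steps: sentence nodes carrying embedding vectors are classified jointly via simple mean-neighbour message passing over a document graph.

```python
# Conceptual sketch only (the paper's GNN architecture, graph augmentation and
# collective classification steps are not reproduced): sentence nodes carrying
# embedding vectors are classified jointly by mean-neighbour message passing.
import torch
import torch.nn as nn

class SimpleGraphLayer(nn.Module):
    """One round of normalised neighbour aggregation followed by a linear map."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: (n_nodes, in_dim) sentence embeddings; adj: (n_nodes, n_nodes) adjacency
        adj = adj + torch.eye(adj.size(0))             # add self-loops
        deg = adj.sum(dim=1, keepdim=True)
        return torch.relu(self.linear(adj @ x / deg))  # average over neighbours

class ArgumentNodeClassifier(nn.Module):
    def __init__(self, in_dim=768, hidden=128, n_classes=3):
        super().__init__()
        self.g1 = SimpleGraphLayer(in_dim, hidden)
        self.out = nn.Linear(hidden, n_classes)        # e.g. premise / conclusion / non-argument

    def forward(self, x, adj):
        return self.out(self.g1(x, adj))
```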
A Self-Configuring Agent-Based Document Indexing System: This paper describes an extensible and scalable approach to indexing documents that is utilized within the Highly Organised Team of Agents for Information Retrieval (HOTAIR) architecture.
Assessing the Influencing Factors on the Accuracy of Underage Facial Age Estimation: Swift response to the detection of endangered minors is an ongoing concern for law enforcement. Many child-focused investigations hinge on digital evidence discovery and analysis. Automated age estimation techniques are needed to aid in these investigations to expedite this evidence discovery process, and decrease investigator exposure to traumatic material. Automated techniques also show promise in decreasing the overflowing backlog of evidence obtained from increasing numbers of devices and online services. A lack of sufficient training data combined with natural human variance has long hindered accurate automated age estimation -- especially for underage subjects. This paper presents a comprehensive evaluation of the performance of two cloud age estimation services (Amazon Web Service's Rekognition service and Microsoft Azure's Face API) against a dataset of over 21,800 underage subjects. The objective of this work is to evaluate the influence that certain human biometric factors, facial expressions, and image quality (i.e. blur, noise, exposure and resolution) have on the outcome of automated age estimation services. A thorough evaluation allows us to identify the most influential factors to be overcome in future age estimation systems.
A Survey on Microservices Trust Models for Open Systems: The microservices architecture (MSA) is a form of distributed systems architecture that has been widely adopted in large-scale software systems in recent years. As with other distributed system architectures, one of the challenges that MSA faces is establishing trust between the microservices, particularly in the context of open systems. The boundaries of open systems are unlimited and unknown, which means they can be applied to any use case. Microservices can leave or join an open system arbitrarily, without restriction as to ownership or origin, and scale extensively. The organisation of microservices (in terms of the roles they play and the communication links they utilise) can also change in response to changes in the environment that the system is situated in. The management of trust within MSAs is of great importance as the concept of trust is critical to microservices communication, and the operation of an open MSA system is highly reliant on communication between these fine-grained microservices. Thus, a trust model should also be able to manage trust in an open environment. Current trust management solutions, however, are often domain-specific and many are not specifically tailored towards the open system model. This motivates research on trust management in the context of open MSA systems. In this paper, we examine existing microservices trust models, identify the limitations of these models in the context of the principles of open microservices systems, propose a set of qualities for open microservices trust models that emerge from these limitations, and assess selected microservices trust models using the proposed qualities.
Augmenting Agent Platforms to Facilitate Conversation Reasoning: Within Multi Agent Systems, communication by means of Agent Communication Languages (ACLs) has a key role to play in the co-operation, co-ordination and knowledge-sharing between agents. Despite this, complex reasoning about agent messaging, and specifically about conversations between agents, tends not to have widespread support amongst general-purpose agent programming languages. ACRE (Agent Conversation Reasoning Engine) aims to complement the existing logical reasoning capabilities of agent programming languages with the capability of reasoning about complex interaction protocols in order to facilitate conversations between agents. This paper outlines the aims of the ACRE project and gives details of the functioning of a prototype implementation within the Agent Factory multi agent framework.
A User Configurable Metric for Clustering in Wireless Sensor Networks: Wireless Sensor Networks (WSNs) are comprised of thousands of nodes that are embedded with limited energy resources. Clustering is a well-known technique that can be used to extend the lifetime of such a network. However, user adaptation is one criterion that is not taken into account by current clustering algorithms. Here, the term "user" refers to the application developer, who will adjust their preferences based on the application-specific requirements of the service they provide to application users. In this paper, we introduce a novel metric named Communication Distance (ComD), which can be used in clustering algorithms to measure the relative distance between sensors in WSNs. It is tailored by user configuration and its value is computed from real-time data. These features allow clustering algorithms based on ComD to adapt to user preferences and dynamic environments. Through experimental and theoretical studies, we seek to deduce a series of formulas to calculate ComD from Time of Flight (ToF), Received Signal Strength Indicator (RSSI), node density and hop count according to a given user profile.
BJUT at TREC 2016 OpenSearch: Search Ranking Based on Clickthrough Data: In this paper we describe our efforts for the TREC OpenSearch task. Our goal for this year is to evaluate the effectiveness of: (1) a ranking method using information crawled from an authoritative search engine; (2) search ranking based on clickthrough data taken from user feedback; and (3) a unified modeling method that combines knowledge from the web search engine and the users' clickthrough data. Finally, we conduct extensive experiments to evaluate the proposed framework on the TREC 2016 OpenSearch data set, with promising results.
Call Graph Profiling for Multi Agent Systems: The design, implementation and testing of Multi Agent Systems is typically a very complex task. While a number of specialist agent programming languages and toolkits have been created to aid in the development of such systems, the provision of associated development tools still lags behind those available for other programming paradigms. This includes tools such as debuggers and profilers to help analyse system behaviour, performance and efficiency. AgentSpotter is a profiling tool designed specifically to operate on the concepts of agent-oriented programming. This paper extends previous work on AgentSpotter by discussing its Call Graph View, which presents system performance information, with reference to the communication between the agents in the system. This is aimed at aiding developers in examining the effect that agent communication has on the processing requirements of the system.
Can Domain Pre-training Help Interdisciplinary Researchers from Data Annotation Poverty? A Case Study of Legal Argument Mining with BERT-based Transformers: Interdisciplinary Natural Language Processing (NLP) research traditionally suffers from the requirement for costly data annotation. However, transformer frameworks with pre-training have shown their ability on many downstream tasks, including digital humanities tasks with only limited, small datasets. Considering the fact that many digital humanities fields (e.g. law) feature an abundance of non-annotated textual resources, and the recent achievements led by transformer models, we pay special attention to whether domain pre-training will enhance transformers' performance on interdisciplinary tasks and how. In this work, we use legal argument mining as our case study. This aims to automatically identify text segments with particular linguistic structures (i.e., arguments) from legal documents and to predict the reasoning relations between marked arguments. Our work includes a broad survey of a wide range of BERT variants with different pre-training strategies. Our case study focuses on: the comparison of general pre-training and domain pre-training; the generalisability of different domain pre-trained transformers; and the potential of merging general pre-training with domain pre-training. We also achieve better results than the current transformer baseline in legal argument mining.
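As a hedged illustration of the comparison performed in this work, the snippet below loads either a general-purpose BERT or a domain pre-trained variant via Hugging Face Transformers and attaches a fresh classification head for fine-tuning. The checkpoint names and label count are examples, not the paper's exact setup.

```python
# Hedged illustration of the comparison performed here: load either a general
# BERT or a domain pre-trained variant and attach a fresh classification head
# for fine-tuning. Checkpoint names and label count are examples only.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

def load_argument_classifier(domain_pretrained=True, num_labels=2):
    name = ("nlpaueb/legal-bert-base-uncased" if domain_pretrained   # example legal-domain checkpoint
            else "bert-base-uncased")                                # general-purpose baseline
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=num_labels)
    return tokenizer, model

# Fine-tuning then proceeds in the usual way (e.g. with the Trainer API) on the
# annotated legal argument mining corpus.
```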
Challenging the Norm in the Teaching of Practical Computer Science: The teaching of practical sessions in Computer Science frequently tends to follow a standard pattern: large numbers of students work in isolation on a particular assignment, enlisting help from whichever demonstrator is available at the relevant time. This model has a number of inherent difficulties. In some cases, each demonstrator may not have the same approach to solving the problem at hand, which can lead to confusion amongst students. Also, it is frequently the case that demonstrators find it difficult to identify those students in most need of additional assistance, as the numbers involved are prohibitively large. This paper describes the restructuring of a first year undergraduate computer science module in UCD. An Active Learning Laboratory was built, mimicking that of the University of Minnesota, to allow learning to take place with a group focus: students provide support to others, share their work with their class and actively work to problem-solve both independently and with benefit to their peers. This facilitated the subdividing of classes into groups, to which a specific demonstrator was assigned, so as to bridge the gap between students and educators by helping to build stronger relationships between them. Encouraging students to work as groups aids interaction between them, and also strengthens the learning for students that aid classmates with the material. The practical aspect of this year's COMP 10050 module involved the use of Alice, a 3D interactive animation programming environment for building virtual worlds (developed at Carnegie Mellon University), which the students used to create their assignments and projects. In addition to the restructuring of the practical sessions, the course content was also altered so as to place the work in which the students engage in a better context. This includes engagement with the strong research community in the school, as well as industry professionals, so as to see interesting and practical applications of relevant technologies.
Classification for Crisis-Related Tweets Leveraging Word Embeddings and Data Augmentation: This paper presents University College Dublin's (UCD) work at the TREC 2019-B Incident Streams (IS) track. The purpose of the IS track is to find actionable messages and estimate their priority among a stream of crisis-related tweets. Based on the track's requirements, we break down the task into two sub-tasks. One is defined as a multi-label classification task that categorises upcoming tweets into different aid requests. The other is defined as a single-label classification task that assigns these tweets one of four different levels of priority. For the track, we submitted four runs, each of which uses a different model for the tasks. Our baseline run trains classification models with hand-crafted features through machine learning methods, namely Logistic Regression and Naïve Bayes. Our other three runs train classification models with different deep learning methods. The deep methods include a vanilla bidirectional long short-term memory recurrent neural network (biLSTM), an adapted biLSTM, and a bi-attentive classification network (BCN) with pre-trained contextualised ELMo embeddings. For all the runs, we apply different word embeddings (in-domain pre-trained, word-level pre-trained GloVe, character-level, or ELMo embeddings) and data augmentation strategies (SMOTE, loss weights, or GPT-2) to explore the influence they have on performance. Evaluation results show that our models perform better than the median for most situations.
CodEX: Source Code Plagiarism Detection Based on Abstract Syntax Trees: CodEX is a source code search engine that allows users to search a repository of source code snippets, using source code snippets themselves as the query. A potential use for such a search engine is to help educators identify cases of plagiarism in students' programming assignments. This paper evaluates CodEX in this context. Abstract Syntax Trees (ASTs) are used to represent source code files on an abstract level. This, combined with node hashing and similarity calculations, allows users to search for source code snippets that match suspected plagiarism cases. A number of commonly-employed techniques to avoid plagiarism detection are identified, and the CodEX system is evaluated for its ability to detect plagiarism cases even when these techniques are employed. Evaluation results are promising, with 95% of test cases being identified successfully.
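A simplified sketch of the AST-plus-hashing idea (CodEX's internal algorithm is not detailed in this abstract, so the specifics below are assumptions): each subtree is hashed on structure alone, ignoring identifier names and constant values, so that simple renaming does not defeat detection; similarity is then a set comparison over subtree hashes.

```python
# Simplified sketch of the AST-plus-hashing idea (the specifics are assumptions,
# not CodEX internals): hash every subtree on structure alone, ignoring identifier
# names and constant values, then compare the resulting hash sets.
import ast
import hashlib

def shape(node):
    """Structural signature of a subtree: node types only, so renaming variables
    or changing literal values does not alter it."""
    return "(" + type(node).__name__ + "".join(
        shape(child) for child in ast.iter_child_nodes(node)) + ")"

def subtree_hashes(source):
    tree = ast.parse(source)
    return {hashlib.sha1(shape(n).encode()).hexdigest() for n in ast.walk(tree)}

def similarity(src_a, src_b):
    a, b = subtree_hashes(src_a), subtree_hashes(src_b)
    return len(a & b) / max(len(a | b), 1)     # Jaccard similarity over subtrees

print(similarity("def f(x):\n    return x + 1",
                 "def g(y):\n    return y + 1"))   # 1.0: renaming alone is not enough
```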
Combining Machine Learning and Logical Reasoning to Improve Requirements Traceability Recovery: Maintaining traceability links of software systems is a crucial task for software management and development. Unfortunately, dealing with traceability links is typically treated as an afterthought due to time pressure. Some studies attempt to use information retrieval-based methods to automate this task, but they only concentrate on calculating the textual similarity between various software artifacts and do not take into account the properties of such artifacts. In this paper, we propose a novel traceability link recovery approach, which comprehensively measures the similarity between use cases and source code by exploring their particular properties. To this end, we leverage and combine machine learning and logical reasoning techniques. On the one hand, our method extracts features by considering the semantics of the use cases and source code, and uses a classification algorithm to train the classifier. On the other hand, we utilize the relationships between artifacts and define a series of rules to recover traceability links. In particular, we not only leverage source code's structural information, but also take into account the interrelationships between use cases. We have conducted a series of experiments on multiple datasets to evaluate our approach against existing approaches, the results of which show that our approach is substantially better than other methods.
Crisis Domain Adaptation Using Sequence-to-Sequence Transformers: User-generated content (UGC) on social media can act as a key source of information for emergency responders in crisis situations. However, due to the volume concerned, computational techniques are needed to effectively filter and prioritise this content as it arises during emerging events. In the literature, these techniques are trained using annotated content from previous crises. In this paper, we investigate how this prior knowledge can be best leveraged for new crises by examining the extent to which crisis events of a similar type are more suitable for adaptation to new events (cross-domain adaptation). Given the recent successes of transformers in various language processing tasks, we propose CAST: an approach for Crisis domain Adaptation leveraging Sequence-to-sequence Transformers. We evaluate CAST using two major crisis-related message classification datasets. Our experiments show that our CAST-based best run without using any target data achieves state-of-the-art performance in both in-domain and cross-domain contexts. Moreover, CAST is particularly effective in one-to-one cross-domain adaptation when trained with a larger language model. In many-to-one adaptation, where multiple crises are jointly used as the source domain, CAST further improves its performance. In addition, we find that more similar events are more likely to bring better adaptation performance, whereas fine-tuning using dissimilar events does not help for adaptation. To aid reproducibility, we open source our code to the community.
Current Challenges and Future Research Areas for Digital Forensic Investigation: Given the ever-increasing prevalence of technology in modern life, there is a corresponding increase in the likelihood of digital devices being pertinent to a criminal investigation or civil litigation. As a direct consequence, the number of investigations requiring digital forensic expertise is resulting in huge digital evidence backlogs being encountered by law enforcement agencies throughout the world. It can be anticipated that the number of cases requiring digital forensic analysis will greatly increase in the future. It is also likely that each case will require the analysis of an increasing number of devices including computers, smartphones, tablets, cloud-based services, Internet of Things devices, wearables, etc. The variety of new digital evidence sources poses new and challenging problems for the digital investigator from an identification, acquisition, storage and analysis perspective. This paper explores the current challenges contributing to the backlog in digital forensics from a technical standpoint and outlines a number of future research topics that could greatly contribute to a more efficient digital forensic process.
Current State of the Art and Future Directions: Augmented Reality Data Visualization To Support Decision-Making: Augmented Reality (AR), as a novel data visualization tool, is advantageous in revealing spatial data patterns and data-context associations. Accordingly, recent research has identified AR data visualization as a promising approach to increasing decision-making efficiency and effectiveness. As a result, AR has been applied in various decision support systems (DSS) to enhance knowledge conveying and comprehension, in which different data-reality associations have been constructed to aid decision-making. However, how these AR visualization strategies can enhance different decision support datasets has not been reviewed thoroughly. Especially given the rise of Big Data in the modern world, this support is critical to decision-making in the coming years. Using AR to embed the decision support data and explanation data into the end user's physical surroundings and focal contexts avoids isolating the human decision-maker from the relevant data. Integrating the decision-maker's contexts and the DSS support in AR is a difficult challenge. This paper outlines the current state of the art, through a literature review, in allowing AR data visualization to support decision-making. To facilitate the publication classification and analysis, the paper proposes a taxonomy to classify different AR data visualizations based on the semantic associations between the AR data and physical context. Based on this taxonomy and a decision support system taxonomy, 37 publications have been classified and analyzed from multiple aspects. One of the contributions of this literature review is a resulting AR visualization taxonomy that can be applied to decision support systems. Along with this novel tool, the paper discusses the current state of the art in this field and indicates possible future challenges and directions that AR data visualization will bring to support decision-making.
Delivering Intelligent Home Energy Management with Autonomous Agents: This poster discusses the design and implementation of the decision making/reasoning infrastructure of an intelligent home energy management system that was developed as part of the Autonomic Home Area Network Infrastructure (AUTHENTIC) Project. Specifically, the poster focuses on the Agent Factory Micro Edition (AFME) functionality that enables the Home Area Network (HAN) to be managed for two home energy management scenarios representative of this space. The energy management system was tested and deployed in both laboratory and real home settings.
Delivering Multi-agent MicroServices Using CArtAgO: This paper describes an agent programming language agnostic implementation of the Multi-Agent MicroServices (MAMS) model - an approach to integrating agents within microservices-based architectures. In this model, agents, deployed within microservices, expose aspects of their state as virtual resources that are externally accessible using REpresentational State Transfer (REST). Virtual resources are implemented as CArtAgO artifacts, exposing their state to the agent as a set of observable properties. Coupled with a set of artifact operations, this enables the agent to monitor and manage its own resources. In the paper, we formally model our approach, defining passive and active resource management strategies, and illustrate its use within a worked example.
Dublin Bogtrotters: Agent Herders: This paper describes an entry to the Multi-Agent Programming Contest 2008. The approach employs the pre-existing Agent Factory framework and extends this framework in line with experience gained from its use within the robotics domain.
Enhancing Legal Argument Mining with Domain Pre-training and Neural Networks: The contextual word embedding model, BERT, has proved its ability on downstream tasks with limited quantities of annotated data. BERT and its variants help to reduce the burden of complex annotation work in many interdisciplinary research areas, for example, legal argument mining in digital humanities. Argument mining aims to develop text analysis tools that can automatically retrieve arguments and identify relationships between argumentation clauses. Since argumentation is one of the key aspects of case law, argument mining tools for legal texts are applicable to both academic and non-academic legal research. Domain-specific BERT variants (pre-trained with corpora from a particular background) have also achieved strong performance in many tasks. To our knowledge, previous machine learning studies of argument mining on judicial case law still heavily rely on statistical models. In this paper, we provide a broad study of both classic and contextual embedding models and their performance on practical case law from the European Court of Human Rights (ECHR). During our study, we also explore a number of neural networks combined with different embeddings. Our experiments provide a comprehensive overview of a variety of approaches to the legal argument mining task. We conclude that domain pre-trained transformer models have great potential in this area, although traditional embeddings can also achieve strong performance when combined with additional neural network layers.
Estimating Probabilities for Effective Data Fusion: Data Fusion is the combination of a number of independent search results, relating to the same document collection, into a single result to be presented to the user. A number of probabilistic data fusion models have been shown to be effective in empirical studies. These typically attempt to estimate the probability that particular documents will be relevant, based on training data. However, little attempt has been made to gauge how the accuracy of these estimations affects fusion performance. The focus of this paper is twofold: firstly, that accurate estimation of the probability of relevance results in effective data fusion; and secondly, that an effective approximation of this probability can be made based on less training data than has previously been employed. This is based on the observation that the distribution of relevant documents follows a similar pattern in most high-quality result sets. Curve fitting suggests that this can be modelled by a simple function that is less complex than other models that have been proposed. The use of existing IR evaluation metrics is proposed as a substitution for probability calculations. Mean Average Precision is used to demonstrate the effectiveness of this approach, with evaluation results demonstrating competitive performance when compared with related algorithms with more onerous requirements.
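A small, simplified sketch of the underlying idea (the paper itself fits a curve and substitutes standard IR evaluation scores for raw probabilities): estimate from training queries how often the document at each rank position is relevant for each input system, then score documents in new result sets by summing those positional estimates.

```python
# Small, simplified sketch of the idea (the paper fits a curve and substitutes
# standard IR evaluation scores for raw probabilities): estimate from training
# queries how often each rank position holds a relevant document per system,
# then score documents in unseen result sets by summing those estimates.
from collections import defaultdict

def estimate_position_probabilities(training_runs, qrels, depth=100):
    """training_runs: {system: {query: [doc ids in rank order]}}
    qrels: {query: set of relevant doc ids}"""
    probs = {}
    for system, runs in training_runs.items():
        rel_counts = [0] * depth
        for query, ranking in runs.items():
            for pos, doc in enumerate(ranking[:depth]):
                if doc in qrels.get(query, set()):
                    rel_counts[pos] += 1
        probs[system] = [c / len(runs) for c in rel_counts]
    return probs

def fuse(result_sets, probs, depth=100):
    """result_sets: {system: [doc ids in rank order]} for one unseen query."""
    scores = defaultdict(float)
    for system, ranking in result_sets.items():
        for pos, doc in enumerate(ranking[:depth]):
            scores[doc] += probs[system][pos]
    return sorted(scores, key=scores.get, reverse=True)   # fused ranking
```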
Evaluating Automated Facial Age Estimation Techniques for Digital Forensics: In today's world, closed circuit television, cellphone photographs and videos, open-source intelligence (i.e., social media and web data mining), and other sources of photographic evidence are commonly used by police forces to identify suspects and victims of both online and offline crimes. Human characteristics such as age, height, weight, gender, hair color, etc., are often used by police officers and witnesses in their description of unidentified suspects. In certain circumstances, the age of the victim can result in the determination of the crime's categorization, e.g., child abuse investigations. Various automated machine learning-based techniques have been implemented for the analysis of digital images to detect soft-biometric traits, such as age and gender, and thus aid detectives and investigators in progressing their cases. This paper documents an evaluation of existing cognitive age prediction services. The evaluative and comparative analysis of the various services was executed to identify trends and issues inherent to their performance. One significant contributing factor impeding the accurate development of the services investigated is the notable lack of sufficient sample images in specific age ranges, i.e., underage and elderly. To overcome this issue, a dataset generator was developed, which harnesses collections of several unbalanced datasets and forms a balanced, curated dataset of digital images annotated with their corresponding age and gender.
Evaluating Communication Strategies in a Multi Agent Information Retrieval System: With the complexity of computer systems increasing with time, the need for systems that are capable of managing themselves has become an important consideration in the Information Technology industry. In this paper, we discuss HOTAIR: a scalable, autonomic Multi-Agent Information Retrieval System. In particular, we focus on the incorporation of self-configuring and self-optimising features into the system. We investigate two alternative methods by which the system can configure itself in order to perform its task. We also discuss the Performance Management element, whose aim is to optimise system performance.
Evaluation of a Conversation Management Toolkit for Multi Agent Programming: The Agent Conversation Reasoning Engine (ACRE) is intended to aid agent developers with the management of conversations to improve the management and reliability of agent communication. To evaluate its effectiveness, a problem was presented to two groups of undergraduate students, one of which was required to create a solution using the features of ACRE and one without. This paper describes the requirements that the evaluation scenario was intended to meet and how these motivated the design of the problem that was presented to the subjects. The solutions were analysed using a combination of simple objective metrics and subjective analysis, which indicated a number of benefits of using ACRE. In particular, subjective analysis suggested that ACRE by default prevents some common problems arising that would limit the reliability and extensibility of conversation-handling code.
Evaluation of a Conversation Management Toolkit for Multi Agent Programming: The Agent Conversation Reasoning Engine (ACRE) is intended to aid agent developers to improve the management and reliability of agent communication. To evaluate its effectiveness, a problem scenario was created that could be used to compare code written with and without the use of ACRE by groups of test subjects. This paper describes the requirements that the evaluation scenario was intended to meet and how these motivated the design of the problem. Two experiments were conducted with two separate sets of students and their solutions were analysed using a combination of simple objective metrics and subjective analysis. The analysis suggested that ACRE by default prevents some common problems arising that would limit the reliability and extensibility of conversation-handling code. As ACRE has to date been integrated only with the Agent Factory multi agent framework, it was necessary to verify that the problems identified are not unique to that platform. Thus a comparison was made with best practice communication code written for the Jason platform, in order to demonstrate the wider applicability of a system such as ACRE.
EviPlant: An Efficient Digital Forensic Challenge Creation, Manipulation and Distribution Solution: Education and training in digital forensics requires a variety of suitable challenge corpora containing realistic features including regular wear-and-tear, background noise, and the actual digital traces to be discovered during investigation. Typically, the creation of these challenges requires overly arduous effort on the part of the educator to ensure their viability. Once created, the challenge image needs to be stored and distributed to a class for practical training. This storage and distribution step requires significant time and resources and may not even be possible in an online/distance learning scenario due to the data sizes involved. As part of this paper, we introduce a more capable methodology and system as an alternative to current approaches. EviPlant is a system designed for the efficient creation, manipulation, storage and distribution of challenges for digital forensics education and training. The system relies on the initial distribution of base disk images, i.e., images containing solely base operating systems. In order to create challenges for students, educators can boot the base system, emulate the desired activity and perform a ``diffing'' of the resultant image and the base image. This diffing process extracts the modified artefacts and associated metadata and stores them in an ``evidence package''. Evidence packages can be created for different personae, different wear-and-tear, different emulated crimes, etc., and multiple evidence packages can be distributed to students and integrated into the base images. A number of additional applications in digital forensic challenge creation for tool testing and validation, proficiency testing, and malware analysis are also discussed as a result of using EviPlant.
Expediting MRSH-v2 Approximate Matching with Hierarchical Bloom Filter Trees: Perhaps the most common task encountered by digital forensic investigators consists of searching through a seized device for pertinent data. Frequently, an investigator will be in possession of a collection of ``known-illegal'' files (e.g. a collection of child pornographic images) and will seek to find whether copies of these are stored on the seized drive. Traditional hash matching techniques can efficiently find files that precisely match. However, these will fail in the case of merged files, embedded files, partial files, or if a file has been changed in any way. In recent years, approximate matching algorithms have shown significant promise in the detection of files that have a high bytewise similarity. This paper focuses on MRSH-v2. A number of experiments were conducted using Hierarchical Bloom Filter Trees to dramatically reduce the quantity of pairwise comparisons that must be made between known-illegal files and files on the seized disk. The experiments demonstrate substantial speed gains over the original MRSH-v2, while maintaining effectiveness.
Experimenting with Ensembles of Pre-Trained Language Models for Classification of Custom Legal Datasets: Document corpora owned by law and regulatory firms pose significant challenges for text classification: they are multi-labelled, highly imbalanced, and often have a relatively small number of instances and a large word count per instance. Deep learning ensemble methods can improve generalization and performance for multi-label text classification, but using pre-trained language models as base learners leads to high computational costs. To tackle the imbalance problem and improve generalization, we present a fast, pseudo-stratified sub-sampling method that we use to extract diverse data subsets to create base models for deep ensembles, based on fine-tuned models from pre-trained transformers with moderate computational cost, such as BERT, RoBERTa, XLNet and ALBERT. A key feature of the sub-sampling method is that it preserves the characteristics of the entire dataset (particularly the labels' frequency distribution) while extracting subsets. This sub-sampling method is also used to extract smaller custom datasets from the freely available LexGLUE legal text corpora. We discuss the approaches used and classification performance results with deep learning ensembles, illustrating the effectiveness of our approach on the above custom datasets.
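The snippet below is an illustrative stand-in for the stated goal rather than the paper's own pseudo-stratified procedure: it draws a subset whose per-label counts are roughly a fixed fraction of the full multi-label dataset, so each ensemble base model sees a label frequency distribution similar to the original.

```python
# Illustrative stand-in for the stated goal (not the paper's own pseudo-stratified
# procedure): draw a subset whose per-label counts are roughly a fixed fraction of
# the full multi-label dataset, so each ensemble base model sees a similar
# label frequency distribution.
import random
from collections import defaultdict

def stratified_subsample(labelled_docs, fraction=0.2, seed=0):
    """labelled_docs: list of (doc_id, set_of_labels) pairs."""
    random.seed(seed)
    by_label = defaultdict(list)
    for doc_id, labels in labelled_docs:
        for label in labels:
            by_label[label].append(doc_id)
    chosen = set()
    for label, docs in by_label.items():
        k = max(1, round(fraction * len(docs)))   # keep every label represented
        chosen.update(random.sample(docs, k))
    return [(d, ls) for d, ls in labelled_docs if d in chosen]
```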
Explicit Modelling of Resources for Multi-Agent MicroServices Using the CArtAgO Framework: This paper describes the first agent programming language agnostic implementation of the Multi-Agent MicroServices (MAMS) model - an approach to integrating agents within microservices-based architectures where agents expose aspects of their state as virtual resources, realised as CArtAgO artifacts, that are externally accessible through REpresentational State Transfer (REST).
Exploring AOP from an OOP Perspective: Agent-Oriented Programming (AOP) researchers have successfully developed a range of agent programming languages that bridge the gap between theory and practice. Unfortunately, despite the in-community success of these languages, they have proven less compelling to the wider software engineering community. One of the main problems facing AOP language developers is the need to bridge the cognitive gap that exists between the concepts underpinning mainstream languages and those underpinning AOP. In this paper, we attempt to build such a bridge through a conceptual mapping between Object-Oriented Programming (OOP) and the AgentSpeak(L) family of AOP languages. This mapping explores how OOP concepts and the concurrent programming concept of threads relate to AgentSpeak(L) concepts. We then use our analysis of this mapping to drive the design of a new programming language entitled ASTRA.
Extending Probabilistic Data Fusion Using Sliding Windows: Recent developments in the field of data fusion have seen a focus on techniques that use training queries to estimate the probability that various documents are relevant to a given query and use that information to assign scores to those documents on which they are subsequently ranked. This paper introduces SlideFuse, which builds on these techniques, introducing a sliding window in order to compensate for situations where little relevance information is available to aid in the estimation of probabilities. SlideFuse is shown to perform favourably in comparison with CombMNZ, ProbFuse and SegFuse. CombMNZ is the standard baseline technique against which data fusion algorithms are compared whereas ProbFuse and SegFuse represent the state-of-the-art for probabilistic data fusion methods.
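A simplified sketch of the sliding-window idea (boundary handling and other SlideFuse details are omitted): rather than estimating the probability of relevance at each rank position in isolation, the observed relevance is averaged over a window of neighbouring positions, compensating for sparse training evidence.

```python
# Simplified sketch of the sliding-window idea (boundary handling and other
# SlideFuse details are omitted): average the observed relevance over a window
# of neighbouring rank positions instead of using each position in isolation.
def windowed_probabilities(position_probs, window=5):
    """position_probs: per-position relevance probabilities from training data."""
    smoothed = []
    for p in range(len(position_probs)):
        lo, hi = max(0, p - window), min(len(position_probs), p + window + 1)
        smoothed.append(sum(position_probs[lo:hi]) / (hi - lo))
    return smoothed
```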
Giving Mobile Devices a SIXTH Sense: Introducing the SIXTH Middleware for Augmented Reality Applications: With the increasing availability of sensors within smartphones and within the world at large, a question arises about how this sensor data can be leveraged by Augmented Reality (AR) devices. AR devices have traditionally been limited by the capability of a given device's unique set of sensors. Connecting sensors from multiple devices using a Sensor Web could address this problem. Through leveraging this Sensor Web, existing AR environments could be improved and new scenarios made possible, using devices that previously could not have been used as part of an AR environment. This paper proposes the use of SIXTH: a middleware designed to generate a Sensor Web, which allows a device to leverage heterogeneous external sensors within its environment to help facilitate the creation of richer AR experiences. This paper will present a worst case scenario, in which the device chosen is a see-through, Android-based Head Mounted Display that has no access to sensors. This device is transformed into an AR device through the creation of a Sensor Web, facilitated by SIXTH, which allows it to sense its environment.
Hierarchical Bloom Filter Trees for Approximate Matching: Bytewise approximate matching algorithms have in recent years shown significant promise in detecting files that are similar at the byte level. This is very useful for digital forensic investigators, who are regularly faced with the problem of searching through a seized device for pertinent data. A common scenario is where an investigator is in possession of a collection of ``known-illegal'' files (e.g. a collection of child abuse material) and wishes to find whether copies of these are stored on the seized device. Approximate matching addresses shortcomings in traditional hashing, which can only find identical files, by also being able to deal with cases of merged files, embedded files, partial files, or if a file has been changed in any way. Most approximate matching algorithms work by comparing pairs of files, which is not a scalable approach when faced with large corpora. This paper demonstrates the effectiveness of using a ``Hierarchical Bloom Filter Tree'' (HBFT) data structure to reduce the running time of collection-against-collection matching, with a specific focus on the MRSH-v2 algorithm. Three experiments are discussed, which explore the effects of different configurations of HBFTs. The proposed approach dramatically reduces the number of pairwise comparisons required, and demonstrates substantial speed gains, while maintaining effectiveness.
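A schematic sketch of the HBFT search strategy follows; filter sizing, the chunking of files into features, and the MRSH-v2 similarity digests themselves are all omitted. Each leaf holds one group of known files, each internal node's Bloom filter covers everything beneath it, and a query feature only descends into branches whose filter reports a possible match.

```python
# Schematic sketch of HBFT search pruning (filter sizing, file chunking and the
# MRSH-v2 digests themselves are omitted): each leaf holds one group of known
# files, each internal node's filter covers everything beneath it, and a query
# feature only descends into branches whose filter reports a possible match.
import hashlib

class BloomFilter:
    def __init__(self, size=2 ** 16, hashes=4):
        self.size, self.hashes, self.bits = size, hashes, bytearray(size)

    def _positions(self, item):
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def might_contain(self, item):
        return all(self.bits[p] for p in self._positions(item))

class HBFTNode:
    def __init__(self, left=None, right=None, group=None):
        """Leaves carry a group of (feature, file_id) pairs; internal nodes union
        the Bloom filters of their two children."""
        self.left, self.right, self.group = left, right, group
        self.filter = BloomFilter()
        if group is not None:
            for feature, _ in group:
                self.filter.add(feature)
        else:
            for child in (left, right):
                for pos, bit in enumerate(child.filter.bits):
                    if bit:
                        self.filter.bits[pos] = 1

    def search(self, feature):
        if not self.filter.might_contain(feature):
            return []                                  # prune this whole subtree
        if self.group is not None:
            return [fid for feat, fid in self.group if feat == feature]
        return self.left.search(feature) + self.right.search(feature)

# Usage: tree = HBFTNode(HBFTNode(group=group_a), HBFTNode(group=group_b))
#        candidate_files = tree.search(some_feature)
```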
Hybrid Agent & Component-Based Management of Backchannels: This paper describes the use of the SoSAA software framework to implement the hybrid management of communication channels (backchannels) across a distributed software system. SoSAA is a new integrated architectural solution enabling context-aware, open and adaptive software while preserving system modularity and promoting the re-use of existing component-based and agent-oriented frameworks and associated methodologies. In particular, we show how SoSAA can be used to orchestrate the adoption of network adapter components to bind functional components that are distributed across different component contexts. Both the performance of the different computational nodes involved and the efficiencies and faults in the underlying transport layers are taken into account when deciding which transport mechanisms to use.
Improving Borderline Adulthood Facial Age Estimation through Ensemble Learning: Achieving high performance for facial age estimation with subjects in the borderline between adulthood and non-adulthood has always been a challenge. Several studies have used different approaches, from the age of a baby to an elder adult, and different datasets have been employed to measure the mean absolute error (MAE), ranging from 1.47 to 8 years. The weakness of the algorithms specifically in the borderline has been a motivation for this paper. In our approach, we have developed an ensemble technique that improves the accuracy of underage estimation in conjunction with our deep learning model (DS13K) that has been fine-tuned on the Deep Expectation (DEX) model. We have achieved an accuracy of 68% for the age group 16 to 17 years old, which is 4 times better than the DEX accuracy for such an age range. We also present an evaluation of existing cloud-based and offline facial age prediction services, such as Amazon Rekognition, Microsoft Azure Cognitive Services, How-Old.net and DEX.
Intelligent Decision-Making in the Physical Environment: The issue of situating intelligent agents within an environment, either virtual or physical, is an important research question in the area of Multi Agent Systems. In addition, the deployment of agents within Wireless Sensor Networks has also received some focus. This paper proposes an architecture to augment the reasoning capabilities of agents with an abstraction of a physical sensing environment over which they have control. This architecture combines the SIXTH sensor middleware platform with the ASTRA agent programming language, using CArtAgO as the intermediary abstraction.
Internalising Interaction Protocols as First-Class Programming Elements in Multi Agent Systems: Since their inception, Multi Agent Systems (MASs) have been championed as a solution for the increasing problem of software complexity. Communities of distributed autonomous computing entities that are capable of collaborating, negotiating and acting to solve complex organisational and system management problems are an attractive proposition. Central to this is the requirement for agents to possess the capability of interacting with one another in a structured, consistent and organised manner. This thesis presents the Agent Conversation Reasoning Engine (ACRE), which constitutes a holistic view of communication management for MASs. ACRE is intended to facilitate the practical development, debugging and deployment of communication-heavy MASs. ACRE has been formally defined in terms of its operational semantics, and a generic architecture has been proposed to facilitate its integration with a wide variety of diverse agent development frameworks and Agent Oriented Programming (AOP) languages. A concrete implementation has also been developed that uses the Agent Factory AOP framework as its base. This allows ACRE to be used with a number of different AOP languages, while providing a reference implementation that other integrations can be modelled upon. A standard is also proposed for the modelling and sharing of agent-focused interaction protocols that is independent of the platform within which a concrete ACRE implementation is run. Finally, a user evaluation illustrates the benefits of incorporating conversation management into agent programming.
MAMS: Multi Agent MicroServices: This paper explores the intersection between microservices and Multi-Agent Systems (MAS), introducing an approach to building MAS known as Multi-Agent MicroServices (MAMS). This approach is motivated in the context of the main properties of microservices and is illustrated through a worked example of a Vickrey Auction implemented as a microservice. The motivation for this work is to facilitate the creation of MAS that can be deployed using the same infrastructure as Hypermedia Systems; offering a closer and more natural integration with hypermedia resources. Further, we believe that our approach enables the creation of reusable components that can be interwoven within both larger MAS and more traditional microservices ecosystems.
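Purely as an illustration of the worked example's domain logic exposed over REST (the MAMS version is realised with agents whose state is published as virtual resources, not with Flask), the sketch below accepts sealed bids and awards the item to the highest bidder at the second-highest price.

```python
# Purely illustrative: the worked example's domain logic exposed over REST with
# Flask (the MAMS version is realised with agents whose state is published as
# virtual resources, not with Flask). Bidders POST sealed bids; the winner pays
# the second-highest price, i.e. the Vickrey rule.
from flask import Flask, jsonify, request

app = Flask(__name__)
bids = {}                                       # bidder id -> bid amount

@app.route("/auction/bids", methods=["POST"])
def place_bid():
    data = request.get_json()
    bids[data["bidder"]] = float(data["amount"])
    return jsonify({"accepted": True}), 201

@app.route("/auction/result", methods=["GET"])
def result():
    if len(bids) < 2:
        return jsonify({"error": "need at least two bids"}), 409
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, (_, second_price) = ranked[0][0], ranked[1]
    return jsonify({"winner": winner, "price": second_price})

if __name__ == "__main__":
    app.run(port=8080)
```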
Multi-Task Transfer Learning for Finding Actionable Information from Crisis-Related Messages on Social Media: The Incident Streams (IS) track is a research challenge aimed at finding important information from social media during crises for emergency response purposes. More specifically, given a stream of crisis-related tweets, the IS challenge asks a participating system to 1) classify the types of users' concerns or needs expressed in each tweet, known as the information type (IT) classification task, and 2) estimate how critical each tweet is with regard to emergency response, known as the priority level prediction task. In this paper, we describe our multi-task transfer learning approach for this challenge. Our approach leverages state-of-the-art transformer models, including both encoder-based models such as BERT and a sequence-to-sequence based T5, for joint transfer learning on the two tasks. Based on this approach, we submitted several runs to the track. The returned evaluation results show that our runs substantially outperform other participating runs in both IT classification and priority level prediction.
On the Benefits of Information Retrieval and Information Extraction Techniques Applied to Digital Forensics: Many jurisdictions suffer from lengthy evidence processing backlogs in digital forensics investigations. This has negative consequences for the timely incorporation of digital evidence into criminal investigations, while also affecting the timelines required to bring a case to court. Modern technological advances, in particular the move towards cloud computing, have great potential in expediting the automated processing of digital evidence, thus reducing the manual workload for investigators. It also promises to provide a platform upon which more sophisticated automated techniques may be employed to improve the process further. This paper identifies some research strains from the areas of Information Retrieval and Information Extraction that have the potential to greatly help with the efficiency and effectiveness of digital forensics investigations.
On the Evaluation of Data Fusion for Information Retrieval: Data Fusion combines document rankings from multiple systems into one, in order to improve retrieval effectiveness. Many approaches to this task have been proposed in the literature, and these have been evaluated in various ways. This paper examines a number of such evaluations, to extract commonalities between approaches. Some drawbacks of the prevailing evaluation strategies are then identified, and suggestions made for more appropriate evaluation of data fusion.
Pervasive Sensing: Addressing the Heterogeneity Problem: Pervasive sensing is characterized by heterogeneity across a number of dimensions. This raises significant problems for those designing, implementing and deploying sensor networks, irrespective of application domain. Such problems include, for example, issues of data provenance and integrity, security, and privacy, amongst others. Thus engineering a network that is fit-for-purpose represents a significant challenge. In this paper, the issue of heterogeneity is explored from the perspective of those who seek to harness a pervasive sensing element in their applications. An initial solution is proposed based on the middleware construct.
Practical Development of Hybrid Intelligent Agent Systems with SoSAA: The development of intelligent Multi Agent Systems (MAS) is a non-trivial task. While much past research has focused on high-level activities such as co-ordination and negotiation, the development of tools and strategies to address the lower-level concerns of such systems is a more recent focus. SoSAA (Socially Situated Agent Architecture) is a strategy for the integration of high-level MASs on one hand with component-based systems on the other. Under the SoSAA strategy, a component-based system is used to provide the lower-level implementation of agent tasks and capabilities, allowing for the agent layer to concentrate on high-level intelligent co-ordination and organisation. This paper provides a practical perspective on how SoSAA can be used in the development of intelligent MASs, illustrating this by demonstrating how it can be used to manage backchannel transport services.
Probabilistic Data Fusion on a Large Document Collection: Data Fusion is the process of combining the output of a number of Information Retrieval algorithms into a single result set, to achieve greater retrieval performance. ProbFuse is a probabilistic data fusion algorithm that has been shown to outperform the CombMNZ algorithm in a number of previous experiments. This paper builds upon this previous work and applies probFuse to the much larger Web Track document collection from the 2004 Text REtrieval Conference.
Probabilistic Data Fusion on a Large Document Collection: Data Fusion is the process of combining the output of a number of Information Retrieval (IR) algorithms into a single result set, to achieve greater retrieval performance. ProbFuse is a data fusion algorithm that uses the history of the underlying IR algorithms to estimate the probability that subsequent result sets include relevant documents in particular positions. It has been shown to outperform CombMNZ, the standard data fusion algorithm against which to compare performance, in a number of previous experiments. This paper builds upon this previous work and applies probFuse to the much larger Web Track document collection from the 2004 Text REtrieval Conference. The performance of probFuse is compared against that of CombMNZ using a number of evaluation measures and is shown to achieve substantial performance improvements.
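For reference, a minimal sketch of the CombMNZ baseline mentioned above: each document's fused score is the sum of its (normalised) scores across the input systems, multiplied by the number of systems that returned it. Min-max normalisation is assumed here; implementations vary in how raw scores are normalised.

```python
# CombMNZ sketch: sum of normalised scores, multiplied by how many input
# systems returned the document.
from collections import defaultdict

def min_max_normalise(run):
    """run: dict mapping doc_id -> raw score from one system."""
    lo, hi = min(run.values()), max(run.values())
    span = (hi - lo) or 1.0
    return {doc: (score - lo) / span for doc, score in run.items()}

def comb_mnz(runs):
    """runs: list of dicts, one per input system."""
    totals, counts = defaultdict(float), defaultdict(int)
    for run in map(min_max_normalise, runs):
        for doc, score in run.items():
            totals[doc] += score
            counts[doc] += 1
    fused = {doc: totals[doc] * counts[doc] for doc in totals}
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

run_a = {"d1": 12.0, "d2": 7.5, "d3": 3.1}
run_b = {"d2": 0.9, "d4": 0.4}
print(comb_mnz([run_a, run_b]))  # d2 benefits from appearing in both runs
```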
Probability-Based Fusion of Information Retrieval Result Sets: Information Retrieval (IR) forms the basis of many information management tasks. Information management itself has become an extremely important area as the amount of electronically available information increases dramatically. There are numerous methods of performing the IR task, both by utilising different techniques and through using different representations of the information available to us. It has been shown that some algorithms outperform others on certain tasks. Very little progress has been made in fusing various techniques to improve the overall retrieval performance of a system. This paper introduces a probability-based fusion technique, probFuse, which shows initial promise in addressing this question. It also compares probFuse with the common CombMNZ data fusion technique.
Probability-Based Fusion of Information Retrieval Result Sets: Information Retrieval (IR) forms the basis of many information management tasks. Information management itself has become an extremely important area as the amount of electronically available information increases dramatically. There are numerous methods of performing the IR task both by utilising different techniques and through using different representations of the information available to us. It has been shown that some algorithms outperform others on certain tasks. Very little progress has been made in fusing various techniques to improve the overall retrieval performance of a system. This paper introduces a probability-based fusion technique probFuse that shows initial promise in addressing this question. It also compares probFuse with the common CombMNZ data fusion technique.
ProbFuse: A Probabilistic Approach to Data Fusion: Data fusion is the combination of the results of independent searches on a document collection into one single output result set. It has been shown in the past that this can greatly improve retrieval effectiveness over that of the individual results. This paper presents probFuse, a probabilistic approach to data fusion. ProbFuse assumes that the performance of the individual input systems on a number of training queries is indicative of their future performance. The fused result set is based on probabilities of relevance calculated during this training process. Retrieval experiments using data from the TREC ad hoc collection demonstrate that probFuse achieves results superior to that of the popular CombMNZ fusion algorithm.
ProbFuse: Probabilistic Data Fusion: In recent years, the proliferation of information being made available in such domains as the World Wide Web, corporate intranets and knowledge management systems, together with the "information overload" problem, has caused Information Retrieval (IR) to change from a niche research area into a multi-billion dollar industry. Many approaches to this task of identifying documents that satisfy a user's information need have been proposed by numerous researchers. Due to this diversity of methods employed to perform IR, retrieval systems rarely return the same documents in response to the same queries. This has led to research being carried out in the fields of data fusion and metasearch, which seek to improve the quality of the results being presented to the user by combining the outputs of multiple IR algorithms or systems into a single result set. This thesis introduces probFuse, a probabilistic data fusion algorithm. ProbFuse uses the results of a number of training queries to build a profile of the distribution of relevant documents in the result sets that are produced by its various input systems. These distributions are used to calculate the probability of relevance for documents returned in subsequent result sets, and this is used to produce a final fused result set to be returned to the user. ProbFuse has been evaluated on a number of test collections, ranging from small collections such as Cranfield and LISA to the Web Track collection from the TREC-2004 conference. For each of these collections, probFuse achieved significantly superior performance to CombMNZ, a data fusion algorithm often used as a baseline against which to compare new techniques.
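The following is a simplified sketch of the segment-based idea behind probFuse, not a faithful reimplementation: training result sets are divided into a fixed number of segments, the proportion of relevant documents per segment is estimated for each input system, and documents in new result sets are scored using the estimate for the segment in which they appear, weighted towards earlier segments. The number of segments and the weighting are illustrative choices; the published algorithm also distinguishes between judged and unjudged documents.

```python
# Simplified probFuse-style fusion: per-system, per-segment relevance
# estimates learned from training queries, then applied to new result sets.
from collections import defaultdict

X = 5  # number of segments per result set (an illustrative choice)

def segment_of(rank, result_len, x=X):
    return min(rank * x // result_len, x - 1)

def train(training_runs, relevant):
    """training_runs[system] = list of ranked doc-id lists (one per query);
    relevant[query_index] = set of relevant doc ids."""
    probs = {}
    for system, runs in training_runs.items():
        hits, totals = defaultdict(int), defaultdict(int)
        for q, ranking in enumerate(runs):
            for rank, doc in enumerate(ranking):
                seg = segment_of(rank, len(ranking))
                totals[seg] += 1
                hits[seg] += doc in relevant[q]
        probs[system] = {seg: hits[seg] / totals[seg] for seg in totals}
    return probs

def fuse(runs, probs):
    """runs[system] = ranked doc-id list for the new query."""
    scores = defaultdict(float)
    for system, ranking in runs.items():
        for rank, doc in enumerate(ranking):
            seg = segment_of(rank, len(ranking))
            # Earlier segments contribute more to the fused score.
            scores[doc] += probs[system].get(seg, 0.0) / (seg + 1)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```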
Reflecting on Agent Programming with AgentSpeak(L): Agent-Oriented Programming (AOP) researchers have successfully developed a range of agent programming languages that bridge the gap between theory and practice. Unfortunately, despite the in-community success of these languages, they have proven less compelling to the wider software engineering community. One of the main problems facing AOP language developers is the need to bridge the cognitive gap that exists between the concepts underpinning mainstream languages and those underpinning AOP. In this paper, we attempt to build such a bridge through a conceptual mapping that we subsequently use to drive the design of a new programming language entitled ASTRA, which has been evaluated by a group of experienced software engineers attending an Agent-Oriented Software Engineering Masters course.
Semantic Network Management for Next Generation Networks: To accommodate the proliferation of heterogeneous network models and protocols, the use of semantic technologies to enable an abstract treatment of networks is proposed. Network adapters are employed to lift network specific data into a semantic representation. Semantic reasoning integrates the disparate network models and protocols into a common data model by making intelligent inferences from low-level network and device details. Automatic discovery of new devices, monitoring of device state, and invocation of device actions in a generic fashion that is agnostic of network types is enabled. A prototype system called SNoMAC is described that employs the proposed approach operating over UPnP, TR-069, and heterogeneous sensors. These sensors are integrated by means of a sensor middleware named SIXTH that augments the capabilities of SNoMAC to allow for intelligent management and configuration of a wide variety of sensor devices. A major benefit of this approach is that the addition of new models, protocols, or sensor types merely involves the development of a new network adapter based on an ontology. Additionally, the semantic representation of the network and associated data allows for a variety of client interfaces to facilitate human input to the management and monitoring of the system.
Separation of Concerns in Hybrid Component and Agent Systems: Modularising requirements is a classic problem of software engineering; concerns often overlap, requiring multiple dimensions of decomposition to achieve separation. Whenever complete modularity is unachievable, it is important to provide principled approaches to the decoupling of concerns. To this end, this paper discusses the Socially Situated Agent Architecture (SoSAA) - a complete construction methodology, which leverages existing well-established research and associated methodologies and frameworks in both the Agent-oriented and Component-based Software Engineering domains. As a software framework, SoSAA is primarily intended to serve as a foundation on which to build agent-based applications by promoting separation of concerns in the development of open, heterogeneous, adaptive and distributed systems. While previous work has discussed the design rationale for SoSAA and illustrated its application to the construction of multiagent systems, this paper focuses on the separation of concerns issue. It highlights concerns typically addressed in the development of distributed systems, such as adaptation, concurrency and fault-tolerance. It analyses how a hybrid agent/component integration approach can improve the separation of these concerns by leveraging modularity constructs already available in agent and component systems, and sets clear guidelines on where the different concerns must be addressed within the overall architecture. Finally, this paper provides a first evaluation of the application of our framework by applying well-known metrics to a distributed information retrieval case study, and by discussing how these initial results can be projected to a typical multiagent application developed with the same hybrid approach.
SIXTH: Cupid for the Sensor Web: With the vast number of sensors on current and future mobile computing devices, as well as within our environment, a revolution in HCI is taking place. Devices with multiple sensors enable navigation applications, location-based searches, touch-based interfaces with haptic feedback and the promise of Augmented Reality, with devices such as Google Glass on the horizon. These new promising devices possess many sensors but may lack a specific sensor required for a given desired interaction. This paper proposes a solution using the SIXTH middleware platform to act as a matchmaker between devices to share sensor data by means of a sensor web. A brief exemplar case study is presented, where a device designed originally as a sensor-less optical see-through video player becomes self-adaptive by changing its display in response to ambient light data made accessible through a sensor web.
SIXTH Middleware for Sensor Web Enabled AR Applications: We increasingly live in a world where sensors have become truly ubiquitous in nature. Many of these sensors are an integral part of devices such as smartphones, which contain sufficient sensors to allow for their use as Augmented Reality (AR) devices. This AR experience is limited by the precision and functionality of an individual device's sensors and its capacity to process the sensor data into a usable form. This paper discusses current work on a mobile version of the SIXTH middleware, which allows for the creation of Sensor Web enabled AR applications by establishing a sensor web between different Android and non-Android devices. This has led to several small demonstrators, which are discussed in this work-in-progress paper. Future work on the project aims to allow for the integration of additional devices, so as to explore new capabilities such as leveraging additional properties of those devices.
Smart Home Energy Management: Autonomically managing energy within the home is a formidable challenge as any solution needs to interoperate with a decidedly heterogeneous network of sensors and appliances, not just in terms of technologies and protocols but also by managing smart as well as "dumb" appliances. Furthermore, as studies have shown that simply providing energy usage feedback to homeowners is inadequate in realising long-term behavioural change, autonomic energy management has the potential to deliver concrete and lasting energy savings without the need for user interventions. However, this necessitates that such interventions be performed in an intelligent and context-aware fashion, all the while taking into account system as well as user constraints and preferences. Thus this chapter proposes the augmentation of home area networks with autonomic computing capabilities. Such networks seek to support opportunistic decision-making pertaining to the effective energy management within the home by seamlessly integrating a range of off-the-shelf sensor technologies with a software infrastructure for deliberation, activation and visualisation.
SoSAA: A Framework for Integrating Components and Agents: Modern computing systems require powerful software frameworks to ease their development and manage their complexity. These issues are addressed within both Component-Based Software Engineering and Agent-Oriented Software Engineering, although few integrated solutions exist. This paper discusses a novel integration strategy, which builds upon both paradigms to address their shortcomings while leveraging their different characteristics to define a complete software framework.
Space-Time Diagram Generation for Profiling Multi Agent Systems: Advances in Agent Oriented Software Engineering have focused on the provision of frameworks and toolkits to aid in the creation of Multi Agent Systems (MAS). However, despite the inherent complexity of such systems, little progress has been made in the development of tools to allow for the debugging and understanding of their inner workings. This paper introduces a novel performance analysis system, named AgentSpotter, which facilitates such analysis. AgentSpotter was developed by mapping conventional profiling concepts to the domain of MASs. We outline its integration into the Agent Factory multi agent toolkit.
Spatiotemporal Object Detection and Activity Recognition: Spatiotemporal object detection and activity recognition are essential components in the advancement of computer vision, with broad applications spanning surveillance, autonomous driving, and smart stores. This chapter offers a comprehensive overview of the techniques and applications associated with these concepts. Beginning with an introduction to the fundamental principles of object detection and activity recognition, we discuss the challenges and limitations posed by existing methods. The chapter progresses to explore spatiotemporal object detection and activity recognition, which entails capturing spatial and temporal information of moving objects in video data. A hierarchical model for spatiotemporal object detection and activity recognition is proposed, designed to maintain spatial and temporal connectivity across frames. Additionally, the chapter outlines various metrics for evaluating the performance of object detection and activity recognition models, ensuring their accuracy and effectiveness in real-world applications. Finally, we underscore the significance of spatiotemporal object detection and activity recognition in diverse fields such as surveillance, autonomous driving, and smart stores, emphasizing the potential for further research and development in these areas. In summary, this chapter provides a thorough examination of spatiotemporal object detection and activity recognition, from the foundational concepts to the latest techniques and applications. By presenting a hierarchical model and performance evaluation metrics, the chapter serves as a valuable resource for researchers and practitioners seeking to harness the power of computer vision in a variety of domains.
The UCD-Net System at SemEval-2020 Task 1: Temporal Referencing with Semantic Network Distances: This paper describes the UCD system entered for SemEval 2020 Task 1: Unsupervised Lexical Semantic Change Detection. We propose a novel method based on the distance between temporally referenced nodes in a semantic network constructed from a combination of the time-specific corpora. We argue for the value of semantic networks as objects for transparent exploratory analysis and visualisation of lexical semantic change, and present an implementation of a web application for the purpose of searching and visualising semantic networks. The system's results with the change measure used for this task were not among the best performing, but further calibration of the distance metric and backoff approaches may improve this method.
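A small sketch of the temporal-referencing idea using networkx (not the actual UCD-Net pipeline): occurrences of a target word from each time-specific corpus are kept as separate nodes in a single co-occurrence network, and the network distance between the word's two temporal nodes serves as a proxy for semantic change. The toy corpora, unweighted shortest-path distance and co-occurrence scheme are illustrative assumptions.

```python
# Temporal referencing sketch: target words get time-tagged nodes
# (word_t1, word_t2) in one shared co-occurrence network.
import itertools
import networkx as nx

def build_network(sentences, tag, targets, graph):
    """Add co-occurrence edges; target words get time-tagged node names."""
    for sent in sentences:
        tokens = [f"{w}_{tag}" if w in targets else w for w in sent]
        for u, v in itertools.combinations(set(tokens), 2):
            w = graph.get_edge_data(u, v, {}).get("weight", 0) + 1
            graph.add_edge(u, v, weight=w)

targets = {"plane"}
G = nx.Graph()
build_network([["plane", "geometry", "euclid"]], "t1", targets, G)
build_network([["plane", "airport", "flight"],
               ["airport", "geometry"]], "t2", targets, G)

# A longer path between the two temporal nodes suggests greater change.
print(nx.shortest_path_length(G, "plane_t1", "plane_t2"))
```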
Towards an Open Science Platform for the Evaluation of Data Fusion: Combining the results of different search engines in order to improve upon their performance has been the subject of many research papers. This has become known as the "Data Fusion" task, and has great promise in dealing with the vast quantity of unstructured textual data that is a feature of many Big Data scenarios. However, no universally-accepted evaluation methodology has emerged in the community. This makes it difficult to make meaningful comparisons between the various proposed techniques from reading the literature alone. Variations in the datasets, metrics, and baseline results have all contributed to this difficulty. This paper argues that a more unified approach is required, and that a centralised software platform should be developed to aid researchers in making comparisons between their algorithms and others. The desirable qualities of such a system have been identified and proposed, and an early prototype has been developed. Re-implementing algorithms published by other researchers is a great burden on those proposing new techniques. The prototype system has the potential to greatly reduce this burden and thus encourage more comparable results being generated and published more easily.
Transformer-Based Multi-task Learning for Disaster Tweet Categorisation: Social media has enabled people to circulate information in a timely fashion, thus motivating people to post messages seeking help during crisis situations. These messages can contribute to the situational awareness of emergency responders, who have a need for them to be categorised according to information types (i.e. the type of aid services the messages are requesting). We introduce a transformer-based multi-task learning (MTL) technique for classifying information types and estimating the priority of these messages. We evaluate the effectiveness of our approach with a variety of metrics by submitting runs to the TREC Incident Streams (IS) track: a research initiative specifically designed for disaster tweet classification and prioritisation. The results demonstrate that our approach achieves competitive performance in most metrics as compared to other participating runs. Subsequently, we find that an ensemble approach combining disparate transformer encoders within our approach helps to improve the overall effectiveness to a significant extent, achieving state-of-the-art performance in almost every metric. We make the code publicly available so that our work can be reproduced and used as a baseline for the community for future work in this domain.
UCD-CS at TREC 2021 Incident Streams Track: In recent years, the task of mining important information from social media posts during crises has become a focus of research for the purposes of assisting emergency response (ES). The TREC Incident Streams (IS) track is a research challenge organised for this purpose. The track asks participating systems to both classify a stream of crisis-related tweets into humanitarian aid related information types and estimate their importance regarding criticality. The former refers to a multi-label information type classification task and the latter refers to a priority estimation task. In this paper, we report on the participation of the University College Dublin School of Computer Science (UCD-CS) in TREC-IS 2021. We explored a variety of approaches, including simple machine learning algorithms, multi-task learning techniques, text augmentation, and ensemble approaches. The official evaluation results indicate that our runs achieve the highest scores in many metrics. To aid reproducibility, our code is publicly available.
UCD-CS at W-NUT 2020 Shared Task-3: A Text to Text Approach for COVID-19 Event Extraction on Social Media: In this paper, we describe our approach in the shared task: COVID-19 event extraction from Twitter. The objective of this task is to extract answers from COVID-related tweets to a set of predefined slot-filling questions. Our approach treats the event extraction task as a question answering task by leveraging the transformer-based T5 text-to-text model. According to the official evaluation scores returned, namely F1, our submitted run achieves competitive performance compared to other participating runs (Top 3). However, we argue that this evaluation may underestimate the actual performance of runs based on text generation. Although some such runs may answer the slot questions well, they may not be an exact string match for the gold standard answers. To measure the extent of this underestimation, we adopt a simple exact-answer transformation method aiming at converting the well-answered predictions to exactly-matched predictions. The results show that after this transformation our run overall reaches the same level of performance as the best participating run, with state-of-the-art F1 scores in three of five COVID-related events. Our code is publicly available to aid reproducibility.
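The exact-answer transformation can be pictured with the following hedged sketch (not the method as implemented for the paper): if a generated answer is not an exact substring of the tweet, it is replaced by the tweet span that most closely resembles it, provided the similarity is high enough. The span length limit, the similarity measure and the threshold are illustrative assumptions.

```python
# Map a free-form generated answer onto the closest exact span of the tweet.
from difflib import SequenceMatcher

def to_exact_answer(generated, tweet, max_span_len=6, threshold=0.8):
    if generated in tweet:
        return generated                     # already an exact span
    tokens = tweet.split()
    best_span, best_sim = generated, 0.0
    for i in range(len(tokens)):
        for j in range(i + 1, min(i + max_span_len, len(tokens)) + 1):
            span = " ".join(tokens[i:j])
            sim = SequenceMatcher(None, generated.lower(), span.lower()).ratio()
            if sim > best_sim:
                best_span, best_sim = span, sim
    return best_span if best_sim >= threshold else generated

tweet = "Positive COVID-19 case reported at St. Mary's Hospital today"
print(to_exact_answer("st mary's hospital", tweet))  # -> St. Mary's Hospital
```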
UCD SIFT in the TREC 2009 Web Track: The SIFT (Segmented Information Fusion Techniques) group in UCD is dedicated to researching Data Fusion in Information Retrieval. This area of research involves the merging of multiple sets of results into a single result set that is presented to the user. As a means of evaluating the effectiveness of this work, the group entered Category B of the TREC 2009 Web Track. This paper discusses the strategies and experiments employed by the UCD SIFT group in entering the TREC Web Track 2009. This involved the use of freely-available Information Retrieval tools to provide inputs to the data fusion process, with the aim of contrasting with more sophisticated systems.
UCD SIFT in the TREC 2010 Web Track: The SIFT (Segmented Information Fusion Techniques) group in UCD is dedicated to researching Data Fusion in Information Retrieval. This area of research involves the merging of multiple sets of results into a single result set that is presented to the user. As a means of both evaluating the effectiveness of this work and comparing it against other retrieval systems, the group entered Category B of the TREC 2010 Web Track. This involved the use of freely-available Information Retrieval tools to provide inputs to the data fusion process. This paper outlines the strategies of the 3 candidate fusion algorithms entered in the ad-hoc task, discusses the methodology employed for the runs and presents a preliminary analysis of the provisional results issued by TREC.
UCD SIFT in the TREC 2011 Web Track: The SIFT (Segmented Information Fusion Techniques) group in UCD is dedicated to researching Data Fusion in Information Retrieval. This area of research involves the merging of multiple sets of results into a single result set that is presented to the user. As a means of both evaluating the effectiveness of this work and comparing it against other retrieval systems, the group entered Category B of the TREC 2011 Web Track. This involved the use of freely-available Information Retrieval tools to provide inputs to the data fusion process. This paper outlines the strategies of the 3 candidate entries submitted to compete in the ad-hoc task, discusses the methodology employed by them and presents a preliminary analysis of the results issued by TREC.
Using Pseudo-Labelled Data for Zero-Shot Text Classification: Existing Zero-Shot Learning (ZSL) techniques for text classification typically assign a label to a piece of text by building a matching model to capture the semantic similarity between the text and the label descriptor. This is expensive at inference time as it requires the text paired with every label to be passed forward through the matching model. The existing approaches to alleviate this issue are based on exact-word matching between the label surface names and an unlabelled target-domain corpus to get pseudo-labelled data for model training, making them difficult to generalise to ZS classification in multiple domains. In this paper, we propose an approach called P-ZSC to leverage pseudo-labelled data for zero-shot text classification. Our approach generates the pseudo-labelled data through a matching algorithm between the unlabelled target-domain corpus and the label vocabularies that consist of in-domain relevant phrases via expansion from label names. By evaluating our approach on several benchmarking datasets from a variety of domains, the results show that our system substantially outperforms the baseline systems, especially in datasets whose classes are imbalanced.
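A hedged sketch of the pseudo-labelling step described above: each label name is expanded into a small vocabulary of in-domain phrases, unlabelled documents are scored against each vocabulary, and confidently matched documents become pseudo-labelled training examples. The expansion is shown here as a hand-written dictionary and the matching as simple phrase counting; the actual expansion and scoring in P-ZSC are more sophisticated.

```python
# Pseudo-labelling sketch: match unlabelled documents against expanded
# label vocabularies and keep only confident matches as training data.
def pseudo_label(documents, label_vocab, min_score=2):
    """label_vocab: dict mapping label -> expanded set of phrases."""
    pseudo = []
    for doc in documents:
        text = doc.lower()
        scores = {label: sum(p in text for p in phrases)
                  for label, phrases in label_vocab.items()}
        label, score = max(scores.items(), key=lambda kv: kv[1])
        if score >= min_score:               # keep only confident matches
            pseudo.append((doc, label))
    return pseudo

label_vocab = {
    "sports":   {"match", "league", "goal", "season"},
    "business": {"market", "shares", "profit", "revenue"},
}
docs = ["Shares rallied as quarterly profit beat the market forecast",
        "An uneventful afternoon"]
print(pseudo_label(docs, label_vocab))   # only the first doc is kept
```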
Winter Wheat Crop Yield Prediction on Multiple Heterogeneous Datasets Using Machine Learning: Winter wheat is one of the most important crops in the United Kingdom, and crop yield prediction is essential for the nation's food security. Several studies have employed machine learning (ML) techniques to predict crop yield on a county or farm-based level. The main objective of this study is to predict winter wheat crop yield using ML models on multiple heterogeneous datasets, i.e., soil and weather, on a zone-based level. Experimental results demonstrated the impact of these datasets when used alone and in combination. In addition, we employ numerous ML algorithms to emphasize the significance of data quality in any machine-learning strategy.
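As a rough illustration of this kind of experimental setup (not the study's actual pipeline or data), the sketch below joins zone-level soil and weather features and fits a regression model to predict yield. The column names, toy values and the choice of a random forest are assumptions for illustration.

```python
# Zone-level join of heterogeneous datasets followed by a simple regressor.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

soil = pd.DataFrame({"zone": [1, 2, 3, 4],
                     "ph": [6.5, 7.1, 5.9, 6.8],
                     "organic_matter": [3.2, 2.8, 4.1, 3.0]})
weather = pd.DataFrame({"zone": [1, 2, 3, 4],
                        "rainfall_mm": [620, 580, 700, 650],
                        "mean_temp_c": [9.8, 10.2, 9.5, 10.0]})
yields = pd.Series([8.9, 8.4, 9.3, 8.7], name="yield_t_ha")

# Combine the heterogeneous datasets on the shared zone identifier.
features = soil.merge(weather, on="zone").drop(columns="zone")

model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, features, yields, cv=2,
                         scoring="neg_mean_absolute_error")
print(-scores.mean())   # mean absolute error across folds
```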
Research: Projects - completed, future and works-in-progress.
Final Year Project: My Final Year Project for a BA in Law and Accounting, titled "Reputation on the Line Online - An Investigation into the Tort of Defamation on the Internet", was submitted on February 15th, 2002.
FYP Research Proposal: Final Year Project Research Proposal submitted November 30th 2000
HDip Major Project: My Major Project as part of my HDip in Computer Science (2002/03) was submitted on April Fools' Day 2003. The project was to build a website on which it is possible to book tickets for a fictional airline, based on the sites of the likes of Ryanair and Aer Lingus. It was done using Java Servlets with a MySQL database.
HOTAIR: My MSc thesis was based on work I performed as part of the HOTAIR (Highly Organised Team of Agents for Information Retrieval) project, which was affiliated with the IIRG (Intelligent Information Retrieval Group) in UCD.
Agent Factory: I was a developer for the Agent Factory framework, which was a central tool in my research on Multi Agent Systems. Agent Factory is a modular and extensible framework that provides comprehensive support for the development and deployment of agent-oriented applications.
SIFT: SIFT (Segmented Information Fusion Techniques) was an SFI-funded project based in UCD. It was aimed at applying fusion techniques in the area of Information Retrieval.
ACRE: ACRE (Agent Conversation Reasoning Engine) is a project designed to furnish agent platforms (and, by extension, the agents themselves) with the facilities necessary for reasoning about conversations. As messages are sent and received, ACRE will match these to conversations (following pre-defined protocols) and generate appropriate beliefs that will allow the developer to manage communication more easily.
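A minimal sketch of this conversation-matching idea, written in Python rather than an agent programming language: a protocol is treated as a finite state machine, incoming messages are matched against the transitions available to an active conversation, and each advance yields something the agent could adopt as a belief. The example protocol, state names and matching rules are illustrative assumptions rather than ACRE's actual protocol format.

```python
# Conversation tracking sketch: protocols as finite state machines,
# messages matched against the transitions of active conversations.
class Protocol:
    def __init__(self, name, start, transitions):
        # transitions: {(state, performative): next_state}
        self.name, self.start, self.transitions = name, start, transitions

class Conversation:
    def __init__(self, protocol, participant):
        self.protocol, self.participant = protocol, participant
        self.state = protocol.start

    def matches(self, sender, performative):
        return (sender == self.participant and
                (self.state, performative) in self.protocol.transitions)

    def advance(self, performative):
        self.state = self.protocol.transitions[(self.state, performative)]
        return f"conversation({self.protocol.name}, {self.participant}, {self.state})"

request_protocol = Protocol(
    "request", "waiting",
    {("waiting", "request"): "requested",
     ("requested", "agree"): "agreed",
     ("agreed", "inform"): "done"})

conv = Conversation(request_protocol, participant="agentB")
for performative in ("request", "agree", "inform"):
    if conv.matches("agentB", performative):
        print(conv.advance(performative))    # belief the agent could adopt
```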
CONSUS: Crop Optimisation Through Sensing, Understanding and Visualisation: CONSUS is a collaborative research partnership between University College Dublin (UCD) and Origin Enterprises PLC that has been supported through the Science Foundation Ireland (SFI) Strategic Partnership Programme. The €17.6 million five-year project will investigate digital, precision agriculture and crop science through a strong multi- and inter-disciplinary approach that combines the leading expertise of UCD in data science and agricultural science with Origin's integrated crop management research, systems, capabilities and extensive on-farm knowledge exchange networks.
TRANSPIRE: TRANSPIRE is a trained AI platform for regulation that combines human expertise with artificial intelligence to demystify laws and regulations, making it easier to do business while protecting consumers.