Artificial Intelligence (MDA522 Assignment 2)

GenAI 

In this assignment, the use of GENERATIVE AI (GenAI) IS PROHIBITED. 

Purpose of the Assignment:  

This assignment seeks to equip students with desirable skills and knowledge in artificial intelligence (AI) and related disciplines, and to fulfill the intended learning outcomes of the unit. In particular, students are expected to use AI models, techniques and tools to solve complex real-life problems that are topical and relevant to modern industry. Students work in groups of four. All groups work on the same general topic and use similar tools to acquire a similar set of skills. The task is competitive in the sense that the group whose deep learning model detects the target with the highest predictive accuracy and lowest error will be judged the winner. Besides the practical programming in Python, the project also has a research component. By working in groups, students also acquire teamwork skills.

Project/Task Background: 

This is a follow-up to Assignment 1 to enable students to further develop their skills in the application of AI to computer vision. The project in this assignment is titled "Development of AI Algorithms to Detect Explosives in Waste Receptacles Using Vision Transformers". Some cities have witnessed the concealment of improvised explosive devices (IEDs), or explosives, in public waste receptacles by criminals to harm people and destroy property. This menace is causing city councils the world over to remove dustbins en masse from public places and public transport as a strategy to solve the problem [1]. The consequence is littering, which causes stench, pests, contamination of water bodies, the spread of diseases and increased greenhouse gas emissions.

The current solutions to the problem are blast-resistant and transparent dustbins. However, neither approach is optimal. First, neither inherently catches the criminals to deter the act. Second, transparent bins leave the unsightly rubbish visible, while blast-resistant dustbins are bulky and costly.

AI algorithms used in computer vision have historically been dominated by convolutional neural networks (CNNs). Currently, the application of the Vision Transformer (ViT) in computer vision is increasing. The ViT is an image classification model that employs a transformer-like architecture over patches of the image. A ViT converts an image into a sequence of non-overlapping patches, similar to how language models (e.g. transformers, LSTMs and RNNs) handle text as sequences of tokens. Each patch is then mapped to a vector by a learned linear projection (the patch embedding) and processed within a transformer encoder. ViTs are capable of grasping global information within images, transcending the limitations of the local feature extraction performed by CNNs.
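As an illustration of the patch-splitting and embedding step described above, the following is a minimal sketch in Python using TensorFlow/Keras. The image size, patch size and embedding dimension are arbitrary values chosen for the example, not values prescribed by this assignment.

import tensorflow as tf
from tensorflow.keras import layers

# Illustrative settings only (assumptions, not assignment requirements).
IMAGE_SIZE = 224    # images are assumed resized to 224 x 224 RGB
PATCH_SIZE = 16     # each non-overlapping patch is 16 x 16 pixels
EMBED_DIM = 64      # dimension of each patch embedding vector
NUM_PATCHES = (IMAGE_SIZE // PATCH_SIZE) ** 2   # 14 x 14 = 196 patches

def extract_patches(images):
    """Split a batch of images into flattened non-overlapping patches."""
    patches = tf.image.extract_patches(
        images=images,
        sizes=[1, PATCH_SIZE, PATCH_SIZE, 1],
        strides=[1, PATCH_SIZE, PATCH_SIZE, 1],
        rates=[1, 1, 1, 1],
        padding="VALID",
    )
    # Result shape: (batch, num_patches, patch_pixels), e.g. (batch, 196, 768) for RGB.
    return tf.reshape(patches, [tf.shape(images)[0], NUM_PATCHES, -1])

# The ViT patch embedding: a learned linear projection plus a learned
# position embedding added to each patch vector.
patch_projection = layers.Dense(EMBED_DIM)
position_embedding = layers.Embedding(input_dim=NUM_PATCHES, output_dim=EMBED_DIM)

def embed_patches(images):
    patches = extract_patches(images)
    positions = tf.range(NUM_PATCHES)
    return patch_projection(patches) + position_embedding(positions)

# Example: embed a dummy batch of two images -> shape (2, 196, 64).
tokens = embed_patches(tf.random.uniform([2, IMAGE_SIZE, IMAGE_SIZE, 3]))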

This assignment seeks to motivate students to develop a novel approach to protect dustbins from illicit use while maintaining them for their traditional role. This is a new research area, so there may not be much literature or many datasets available. The project lies at the intersection of AI, IoT and environmental security; however, the task will focus on the design of AI algorithms.

Methodology: 

Students are expected to build AI/ML/DL models to be implemented in waste receptacles or rubbish bins to detect whether rubbish thrown into a bin is real waste or a potential explosive. They should search for and review existing algorithms and techniques, and then attempt to design a new algorithm/model that is better in some sense than the existing ones, if any. Object detection is an image processing task under computer vision. The tasks involved include:

1) Search for existing databases of images of waste/garbage, and of features of waste and detonations or IEDs, and use them to build a new database.

2) Through an extensive search, select the appropriate AI algorithms and Python libraries used for image classification.

3) Use the tools and database constructed to train AI/DL/ML models based on ViTs to classify  waste into either:  

(a) 2 groups: real waste/garbage or IED. This will build a deep learning model for binary  classification. 

(b) More than 2 groups: real waste/garbage sorted into classes, with IEDs forming another class. This will build a deep learning model for multi-class classification (a minimal sketch of both output heads is given after this list).
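For orientation only, the difference between tasks (a) and (b) in a Keras model comes down to the output layer and the loss function. The sketch below assumes TensorFlow/Keras, and the class count is a hypothetical placeholder that depends on the database the group builds.

from tensorflow.keras import layers, losses

# (a) Binary classification: real waste vs IED.
binary_output = layers.Dense(1, activation="sigmoid")
binary_loss = losses.BinaryCrossentropy()

# (b) Multi-class classification: several waste classes plus an IED class.
NUM_CLASSES = 5  # hypothetical; set to the number of classes in your database
multiclass_output = layers.Dense(NUM_CLASSES, activation="softmax")
multiclass_loss = losses.SparseCategoricalCrossentropy()  # for integer labels

Note that SparseCategoricalCrossentropy assumes integer class labels; if the labels are one-hot encoded, CategoricalCrossentropy would be used instead.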

Project Objectives and Deliverables: 

The project falls under object detection in computer vision, and the following are the two deliverables  or milestones expected in this project: 

• Use the existing database constructed in [1], extend the database in [1], or construct a new database of IED images that is good enough, regarding veracity, volume and value, to solve the target problem.

• Construct a deep learning model based on ViTs, using Keras in Jupyter Notebook, which can detect IEDs (explosives) thrown into a waste receptacle. This is the target problem (a minimal Keras sketch of such a model follows this list).
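As a starting point only, a compact ViT classifier can be assembled from standard Keras layers roughly as below. All hyperparameters are illustrative assumptions, and the patch embedding is implemented with a strided Conv2D, which is equivalent to splitting the image into patches and applying a shared linear projection.

import tensorflow as tf
from tensorflow.keras import layers, Model

# Hypothetical hyperparameters; tune them against the actual database.
IMAGE_SIZE, PATCH_SIZE, EMBED_DIM = 224, 16, 64
NUM_PATCHES = (IMAGE_SIZE // PATCH_SIZE) ** 2
NUM_HEADS, NUM_BLOCKS, MLP_DIM = 4, 4, 128
NUM_CLASSES = 2  # 2 for binary (waste vs IED); increase for multi-class

class PatchEmbedding(layers.Layer):
    """Patch splitting, linear projection and learned position embedding."""
    def __init__(self, num_patches, embed_dim, patch_size, **kwargs):
        super().__init__(**kwargs)
        self.num_patches = num_patches
        # A strided convolution projects each patch to an embed_dim vector.
        self.projection = layers.Conv2D(embed_dim, kernel_size=patch_size, strides=patch_size)
        self.flatten = layers.Reshape((num_patches, embed_dim))
        self.position = layers.Embedding(input_dim=num_patches, output_dim=embed_dim)

    def call(self, images):
        tokens = self.flatten(self.projection(images))
        positions = tf.range(self.num_patches)
        return tokens + self.position(positions)

def build_vit(num_classes=NUM_CLASSES):
    inputs = layers.Input(shape=(IMAGE_SIZE, IMAGE_SIZE, 3))
    x = PatchEmbedding(NUM_PATCHES, EMBED_DIM, PATCH_SIZE)(inputs)

    # Standard transformer encoder blocks with residual connections.
    for _ in range(NUM_BLOCKS):
        attn_in = layers.LayerNormalization()(x)
        attn_out = layers.MultiHeadAttention(num_heads=NUM_HEADS, key_dim=EMBED_DIM)(attn_in, attn_in)
        x = layers.Add()([x, attn_out])
        mlp_in = layers.LayerNormalization()(x)
        mlp_out = layers.Dense(MLP_DIM, activation="gelu")(mlp_in)
        mlp_out = layers.Dense(EMBED_DIM)(mlp_out)
        x = layers.Add()([x, mlp_out])

    # Pool the patch tokens and classify.
    x = layers.LayerNormalization()(x)
    x = layers.GlobalAveragePooling1D()(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return Model(inputs, outputs)

model = build_vit()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()

Given that IED image datasets are likely to be small, groups may find that fine-tuning a pre-trained ViT yields better accuracy than training a small model like this from scratch; the sketch is only meant to show the architecture expected in the deliverable.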

Needed Tools: 

• Laptop (students should come to class with their laptops) 

• Python, NumPy, SciPy, Matplotlib, scikit-learn, Keras/TensorFlow, etc., running in Jupyter Notebook in the Anaconda Python distribution.

Submission Instructions: 

1) Each group shall submit only one report to the appropriate Moodle link for marking and  grading.  

2) Deliverables: the submission must consist of the following three (3) files:

a. Groupk_Ass2.html: This is the programming/practical part, developed in a single Jupyter Notebook and exported to HTML from within the notebook via:

File → Download as → HTML (.html)

The Jupyter Notebook must use the Markdown markup language to explain the solution to the task concisely but clearly.

b. Groupk_Report_Ass2.doc: This is the file containing the project report. All figures  and results in the Jupyter Notebook must be captured in this report. 

c. Groupk.ppt: This is the file that summarizes the project report to be presented in  class in Week 11. 

where k is the Group number. 

Note: Moodle cannot accept any file exceeding 200 MB. Such files should be stored on a cloud platform and loaded into the Jupyter Notebook in a way that allows the notebook to run without any further ado.
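As one possible way to satisfy this requirement, the notebook can fetch a cloud-hosted dataset at run time; the URL below is a hypothetical placeholder for the group's actual public link.

import tensorflow as tf

# Hypothetical URL: replace with the group's actual public cloud link.
DATASET_URL = "https://example.com/groupk_ied_waste_images.zip"

# Downloads and caches the archive the first time the notebook runs, and
# extracts it, so the marker can re-run the notebook without manual steps.
data_path = tf.keras.utils.get_file(origin=DATASET_URL, extract=True)
print("Dataset available at:", data_path)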

3) Format of the group report (i.e. Groupk_Report_Ass2.doc). Each group report must be formatted as an IEEE journal paper with a cover page showing the names and student IDs of the students in the group. The cover page should also contain a table showing the contributions of each student to the project and the reports. The sections of the main report should use the following titles and numbering:

a. Title 

b. Abstract 

c. I. Introduction/background 

d. II. Literature review (or related works) 

(This section analyses works completed and published that relate to the topic under review. Cited works should include at least 5 peer-reviewed journal papers.)

e. III. Materials and Methods (aka Research Methods) 

A. Data Used for the Model Training

(This subsection should include the source of the data and its description/features)

B. Method/Algorithms/Models Used (the ViT model/s)

C. Experimental Setup 

D. Model Training and Experiments 

f. IV. Results and discussions 

[This section documents a thorough comparison of the performance of any alternative methods used, or alternative values for model parameters, etc., and discusses how the results could be improved.]

g. V. Conclusions 

h. Acknowledgment 

i. References  

(List of ONLY works cited in the body of the project report and presentation slides.  IEEE referencing and bibliographic style must be perfectly adhered to.) 

• Note that some of the sections in the report should not be numbered, namely Title, Abstract, Acknowledgment (if any) and References.

• Use illustrations in the form of figures and tables (copied from the Jupyter Notebook) wherever deemed fit. Learn from the related works that you study as you do the project. If you read one paper per week, you will have studied at least 11 papers by the time the project is completed.

Final grade = 0.3 × presentation grade + 0.7 × (combined grade for the project report and Jupyter Notebook)
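For example, with hypothetical scores of 80 for the presentation and 90 for the combined report and Jupyter Notebook, the final grade would be 0.3 × 80 + 0.7 × 90 = 87.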

Marking Guide for the Oral Presentation 

Each group shall present a summary of its work in class in Week 11. The presentation shall be graded  using the criteria in the table below.

Category | Scoring Criteria | Points | Score
Organization (15 points) | The type of presentation is appropriate for the topic and audience. | 3 |
| Information is presented in a logical sequence. | 3 |
| Presentation appropriately cites the requisite number of references. | 3 |
Content (45 points) | Introduction is attention-getting, lays out the problem well, and establishes a framework for the rest of the presentation. | 3 |
| Technical terms are well-defined in language appropriate for the target audience. | 3 |
| Presentation contains accurate information. | 5 |
| Material included is relevant to the overall message/purpose. | 5 |
| Appropriate amount of material is prepared, and points made reflect well their relative importance. | 5 |
| There is an obvious conclusion summarizing the presentation. | 3 |
Presentation (40 points) | Speaker maintains good eye contact with the audience and is appropriately animated (e.g., gestures, moving around, etc.). | 3 |
| Speaker uses a clear, audible voice. | 3 |
| Delivery is poised, controlled, and smooth. | 2.5 |
| Good language skills and pronunciation are used. | 2.5 |
| Visual aids are well prepared, informative, effective, and not distracting. | 2.5 |
| Length of presentation is within the assigned time limits. | 2.5 |
| Information was well communicated. | 5 |
Total Points | | 54 |

Marking Criteria: Rubric for the Main Report 

Your submission will be marked using the criteria in this table. 

Each criterion below is assessed on the following success levels: Strongly Agree [6 marks], Agree [5 marks], Somewhat Agree [4 marks], Somewhat Disagree [3 marks], Disagree [2 marks], Strongly Disagree [1 mark].

[Abstract] The abstract (200–250 words in length) includes background, literature review, research questions, research methodology, results, and conclusions.

[Background/Introduction] The background section situates the study and explains its significance.

[Literature Review] The literature review situates the reported research in the context of work by other researchers, identifies needed advances, and establishes justification for the present study.

[Research Questions] The research questions are framed in terms of needed work identified in the literature review and align that work with the research methodology.

[Research Methodology] The research methodology section consists of a comprehensive description of how the study was executed, written in such a manner that others should be able to replicate the study. While the structure typically includes information on research design, sequence, collection, participants, and analysis, this section may vary depending on the nature of the study. Finally, the section must indicate that an Institutional Review Board or its equivalent has approved the study if it involves human subjects, or that it was exempt.

[Results] The results section provides evidence that the research questions have been addressed. Results of quantitative studies should be reported according to standards identified by the American Psychological Association in terms of sample size, descriptive and inferential statistics, and effect size. The nature of the reporting may vary, depending on the nature of the study.

[Conclusions] The conclusion section describes how the results have answered the research questions and discusses implications of the findings within the larger context of professional communication. This section may also include study limitations and directions for further research.

[References & in-text citation] Adheres strictly to IEEE standards.

[Spelling, grammar, cohesiveness and structure] No spelling and/or grammar mistakes; the structure of the report is perfectly that of an IEEE journal paper.

Note: The research questions should not stand alone as a section but should be integrated into the abstract, introduction, literature review, results and conclusions.

References 

[1] A. Gyasi-Agyei, "Detection of explosives in dustbins using deep transfer learning based multiclass classifiers," Appl. Intell., vol. 54, pp. 2314–2347, Jan. 2024.

[2] A. Gyasi-Agyei, "I2Net: Database of IED images for IED detection in public waste receptacles," 2023.
