NLP303 Natural Language Processing And Speech Recognition Assessment

Subject Code and Title: NLP303 Natural Language Processing And Speech Recognition
Assignment Type: Assessment 1
Assessment: Programming Task
Individual/Group: Individual
Length: Source code with a 750-word (+/- 10%) report
Weighting: 30%

Learning Outcomes:
The Subject Learning Outcomes demonstrated by successful completion of the task below include:
a) Evaluate Natural Language Processing and Speech Recognition techniques.
b) Apply the theories and the frameworks of Natural Language Processing and Speech Recognition.
d) Develop solutions to Natural Language Processing and Speech Recognition problems.

Assessment Task:

This assessment explores and investigates the use of the Hugging Face (HFace) open-source Transformers library. You will demonstrate the value of the library by performing high-level Natural Language Processing (NLP) tasks and delivering NLP solutions.

Please refer to the Task Instructions for details on how to complete this task.

Context:
This assessment recognises the tectonic shift taking place in how NLP projects are approached in the 2020s. According to Sebastian Ruder, a key global thought leader and research scientist in NLP with Google DeepMind, "It only seems to be a question of time until pretrained word embeddings will be dethroned and replaced by pretrained language models in the toolbox of every NLP practitioner" (Ruder, 2018). Together with the overwhelming demonstrations and use of transformer technology by OpenAI, Google, Facebook, Microsoft and Baidu, there is no reason at all not to consider this technology as part of any reasonably sized NLP project, i.e., beyond toy applications or research. Understanding transformer technology and its implementations is essential for anyone who works in the NLP field. This transformer-centric assessment is therefore essential to undertake in order to:

1. Help provide context for keeping up with the continual news updates on transformers, and

2. Enter the workplace in any role associated with NLP, if only to compare and contrast the new methods of conducting NLP tasks with the traditional, fragile rules-based approach.

Completing this assessment will provide hands-on experience with how Transformer models fit into NLP projects, as well as an understanding of the variety of models available from the Hugging Face Hub, in particular how to acquire a model and run typical NLP tasks.

The skills developed include the use of open-source transformers and the ability to self-direct learning towards practice with limited instruction. These two skills alone will enable you to undertake innovative proofs of concept or a minimum viable project with limited funding, while benefiting from the technology investments of the tech giants.

This concise background on transformer technology and the importance of the Hugging Face open-source Transformers library provides sufficient context, including the scale and scope of these models and the vocabulary required, to undertake this project in a learner or early-professional role.

State-of-the-art natural language processing tools build on the neural network architecture of the Transformer (Vaswani et al., 2017). However, two key approaches have helped ensure that Transformer models have become the de facto model for NLP. The first is self-attention, which captures dependencies between sequence elements.

The second, now the dominant and most important approach in NLP, is transfer learning: pre-training models on large unlabelled text corpora in an unsupervised manner, then fine-tuning them on a smaller task-specific dataset. Further evidence of the superiority of Transformers over the earlier recurrent and convolutional neural network architectures is available by studying the leaderboards of NLP benchmarks.

Hugging Face is a machine learning company supporting an open-source community for language model development. The Transformers library includes pre-trained models available from the Hugging Face Model Hub.

The NLP machine learning model pipeline in the Transformers library follows the workflow:

process data → apply a model → make predictions
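Written out by hand, the same three stages look roughly like the following minimal sketch, assuming the distilbert-base-uncased-finetuned-sst-2-english sentiment checkpoint from the Hub as the example model:

from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# process data: tokenise raw text into model-ready tensors
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
inputs = tokenizer("The magic of transformers lies in pre-trained models", return_tensors="pt")

# apply a model: run the pre-trained network over the encoded inputs
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
with torch.no_grad():
    logits = model(**inputs).logits

# make predictions: map the highest-scoring logit back to a human-readable label
predicted = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted])

The pipeline function introduced below collapses all three stages into a single call.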

The library enables high-level NLP tasks such as text classification, named-entity recognition, machine translation, summarisation, question answering and much more. However, Transformers go well beyond handling NLP tasks: they also offer solutions such as text generation for the auto-completion of stories, as seen with the highly publicised Generative Pre-trained Transformer (GPT) models v1-3.

To appreciate the variety of transformers beyond the GPTs, other popular Transformer models available in open source include Google's BERT (Bidirectional Encoder Representations from Transformers), Facebook's BART, RoBERTa (Robustly Optimised BERT Pre-training) and T5 (Text-to-Text Transfer Transformer). Transformer models handle very large numbers of neural network parameters, e.g., BERT (340 million parameters), the GPT-3 model (175 billion), Switch, scaling to 1.6 trillion (Fedus, Zoph & Shazeer, 2021, p. 17), and WuDao 2.0 (1.75 trillion parameters; Romero, 2021). However, this still falls short of the 1,000 trillion synapses in the human brain (Zhang, 2019), the biological equivalent of neural network parameters (Dickson, 2020).

A cheat sheet has been provided which encapsulates the progress of transformer architectures and variants, together with parameter counts, consolidating the transformer landscape into a single A3 poster (Sood, 2021). Please refer to the cheat sheet for guidance as you progress through the assessment.

Instructions:
You will need to use a Google Colaboratory notebook (.ipynb) with Python to undertake this assignment. Your Transformer model notebook benefits from hardware acceleration and a GPU runtime. In Colab, select the menu option Runtime -> Change runtime type, select Hardware accelerator -> GPU, and click SAVE.
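To confirm the GPU runtime is active, a quick check in a notebook cell is the following minimal sketch (PyTorch comes pre-installed on Colab):

import torch

# True when the Colab GPU runtime is active and visible to PyTorch
print(torch.cuda.is_available())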

You may find it extremely helpful to use the Markdown function available in the notebook to document your code snippets, generate comments and capture observations for your final assessment report as you work through your own notebook. You can find plenty of examples of well-documented Transformer notebook files online and on GitHub for you to review, including the Hugging Face notebooks.

To simplify your coding enormously, use Hugging Face Transformers and its pipeline. The pipeline contains many out-of-the-box functions and models, reducing your code to literally just a few lines; this simplicity is one of the great benefits of using Transformers.

Ensure the installation and import of Hugging Face Transformers, and install and test the machine translation pipeline with the following notebook code:

# Ensure installation of the HFace Transformers library
!pip install transformers

# Install and test the pipeline import
from transformers import pipeline

# Just one line of code here
translator = pipeline("translation_en_to_de")
print(translator("The magic of transformers lies in pre-trained models"))
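If the installation and import succeed, the call returns a list of dictionaries; for this example you should see output along the lines of [{'translation_text': '...'}] containing the German translation (the first run also downloads the default model weights, so expect a short wait).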

Beyond the steps outlined to prepare your notebook, three multi-step activities are required to complete this assessment task. Remember that, for ease of completion and as general good practice, you should document your notebook with Markdown as you move through the multi-stage Activities 1 and 2.

For all the requested tasks and solution development, you should use your own text as input rather than answers found elsewhere online. Hint: you might wish to adopt a theme from your favourite fiction book, including its characters, to help efficiently provide answers for each set of activities comprising the assessment. Alternatively, you can source text from Project Gutenberg.

Project Gutenberg is a library containing over 60,000 free e-books available in the public domain. The file formats include plain text.
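As a minimal sketch of sourcing text this way (the URL is an illustrative Project Gutenberg plain-text link; substitute whichever e-book you choose):

import urllib.request

# illustrative plain-text e-book URL from gutenberg.org; swap in your own choice
url = "https://www.gutenberg.org/files/1342/1342-0.txt"
text = urllib.request.urlopen(url).read().decode("utf-8")
print(text[:500])  # preview the opening of the e-book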

Owing to the changing nature of the field and ongoing enhancements to models, the cycle from publication to implementation of innovative new Transformers has been considerably compressed. In light of this, no reference textbook is prescribed; consult the online documentation for the latest release of Transformers instead.

In some communities you might have to refine your search to 'Transformers NLP' or 'Hugging Face Transformers'; otherwise you will end up wading through content about the human-like robots of the Transformers film and TV series. If you have issues outside your control with any existing notebooks, or with your own, first check the online documentation.

Activity 1: Programming Tasks for Frequent NLP Use Cases with Hugging Face Transformers
Ensure Hugging Face Transformers is installed, together with the import of the pipeline. You will gain hands-on experience with the state-of-the-art pre-trained language models available in the open-source Transformers library and see the possibilities one line of code at a time.

For this activity (a minimal sketch covering all six tasks follows this list):
1. Specify a sequence of text and, using the Transformers pipeline for Named Entity Recognition (NER), identify a list of words belonging to at least one of three classes, e.g., a person, an organisation or a location.

2. Again using your own sequences of text and the Transformers pipeline, classify at least two sequences of 10-25 words as positive or negative in sentiment.

3. Use the Transformers pipeline to summarise an article or sequence of text of roughly 350 to 500 words into 100 words or fewer.

4. Illustrate text generation using the text-generation pipeline, auto-completing 500 words from a starting point of just a few sentences.

5. Extract an answer from your text given a question, using the Transformers question-answering pipeline. Show the answer extracted from the text together with a confidence score and the positions of the extracted answer in the text.

6. Translate a text of 3 to 5 sentences from English to French using the translation pipeline.
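A minimal sketch covering all six tasks with the default pipeline models (the placeholder strings stand in for your own text; the task names are the standard pipeline identifiers):

from transformers import pipeline

# 1. Named Entity Recognition: tag persons, organisations and locations
ner = pipeline("ner", grouped_entities=True)
print(ner("Your own sentence naming a person, an organisation and a place."))

# 2. Sentiment analysis: classify each sequence as POSITIVE or NEGATIVE
sentiment = pipeline("sentiment-analysis")
print(sentiment(["Your first 10-25 word sequence.", "Your second 10-25 word sequence."]))

# 3. Summarisation: max_length caps the summary length in tokens (~100 words or fewer)
summariser = pipeline("summarization")
print(summariser("Your 350-500 word article goes here.", max_length=100))

# 4. Text generation: auto-complete from a few sentences (max_length counts tokens)
generator = pipeline("text-generation")
print(generator("Your few opening sentences go here.", max_length=500))

# 5. Question answering: returns the answer span, a score, and start/end positions
qa = pipeline("question-answering")
print(qa(question="Your question?", context="Your text containing the answer."))

# 6. Translation: English to French
translator_fr = pipeline("translation_en_to_fr")
print(translator_fr("Your 3 to 5 English sentences go here."))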

Ensure your notebook Markdown commentary, code and output, including any references for this activity, are complete.

Activity 2: Programming Task for NLP Transformer Solutions

The Hugging Face Model Hub contains a large collection of searchable pre-trained models covering a range of NLP tasks, datasets and metrics. These models can be used out of the box, as you have already witnessed when completing Activity 1. In the Model Hub you can try each model without coding, through a simple interface with supporting documentation. This test-drive capability builds on the HFace Inference API, which is itself built on top of pipeline.

Continue on with the same notebook from Activity 1.

For this activity (items 1 to 3 are sketched after the item 4 transcription code below):
1. Use the model distilbert-base-uncased with a pipeline and your own example to illustrate masked language modelling. Here the model generates text options to fill the masked input, mindful of a context of 5 to 10 words.

2. Locate and download the ProsusAI/finbert model from the HFace hub and select 3 to 5 stock market headlines to classify the sentiment of the financial content.

3. Download the Microsoft DialoGPT-large model, a large-scale pretrained dialogue response generation model for multi-turn conversations (Zhang et al., 2020). The model has been trained on 147M multi-turn dialogues from Reddit. Copy the code snippet provided on the HFace hub into your notebook, make any necessary changes or additions, and try chatting. Chat for 5 turns or more.

4. Review the Facebook Wav2Vec2 model (wav2vec2-base-960h), a speech recognition model that learns the structure of speech from raw audio. Create a .wav audio file using the HFace hub interface to record your voice directly from the browser. When satisfied with a recording of 10 words or so (after playback through the same interface), save it as 'audio only.wav' or whatever name you prefer. Then use the code below to load the audio into your notebook and convert it to text (speech recognition).

Source: Gautam, T. (2021, Feb). Introduction to Hugging Face’s Transformers v4.3.0 and its First Automatic Speech Recognition Model – Wav2Vec2. Analytics Vidhya.

The following code can be run in your notebook:

!pip install -q transformers

import librosa
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Tokenizer

# load model and tokenizer
tokenizer = Wav2Vec2Tokenizer.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

Since the base model is pre-trained on 16 kHz audio, we must make sure our audio sample is also resampled to a 16 kHz sampling rate.

Next, we tokenize the inputs, making sure to set our tensors to PyTorch objects instead of Python integers.

# load any audio file of your choice
# upload your audio file via the left bar of Colaboratory and use an appropriate path
speech, rate = librosa.load("/content/sample_data/audio only.wav", sr=16000)

input_values = tokenizer(speech, return_tensors="pt").input_values

# store logits (non-normalised predictions)
logits = model(input_values).logits

# store predicted ids
predicted_ids = torch.argmax(logits, dim=-1)

# decode the audio to generate text
transcriptions = tokenizer.decode(predicted_ids[0])
print(transcriptions)
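For items 1 to 3 of this activity, a minimal sketch follows (model identifiers as listed on the HFace hub; the chat loop adapts the pattern shown on the DialoGPT model card; the mask sentence and headlines are illustrative stand-ins for your own text):

from transformers import pipeline, AutoModelForCausalLM, AutoTokenizer
import torch

# 1. Masked language modelling with distilbert-base-uncased ([MASK] is its mask token)
fill_mask = pipeline("fill-mask", model="distilbert-base-uncased")
print(fill_mask("The wizard raised his [MASK] and the room fell silent."))

# 2. Financial sentiment with ProsusAI/finbert (example headlines; use your own 3 to 5)
finbert = pipeline("sentiment-analysis", model="ProsusAI/finbert")
headlines = [
    "Shares rally as quarterly profits beat expectations.",
    "Regulator opens probe into accounting irregularities.",
    "Company reports flat revenue for the third quarter.",
]
print(finbert(headlines))

# 3. Multi-turn chat with microsoft/DialoGPT-large
chat_tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-large")
chat_model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-large")

chat_history_ids = None
for step in range(5):  # chat for 5 turns or more
    user_input = input(">> You: ")
    new_ids = chat_tokenizer.encode(user_input + chat_tokenizer.eos_token, return_tensors="pt")
    # append the new user turn to the running conversation history
    bot_input_ids = torch.cat([chat_history_ids, new_ids], dim=-1) if chat_history_ids is not None else new_ids
    chat_history_ids = chat_model.generate(bot_input_ids, max_length=1000, pad_token_id=chat_tokenizer.eos_token_id)
    print("Bot:", chat_tokenizer.decode(chat_history_ids[0, bot_input_ids.shape[-1]:], skip_special_tokens=True))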

Ensure your notebook Markdown commentary, code and output, including any references for this activity, are complete.

Activity 3: Report
Writing up your recent practical experiences with transformers will help you demonstrate skills and knowledge associated with:

  • Using transformers in NLP
  • Different types of NLP tasks
  • Ethics of NLP when using transformers to automatically generate text

Beyond acquiring these skills, the reflection will help you determine if NLP is an area of interest for your future professional work.

For this activity:

1. Gather your code, outputs, comments and references from your notebook to form the body of your report (approximately 500 words). Follow the chronology spelt out in Activities 1 and 2 to help structure and lay out your report. Feel free to use any literature to highlight aspects of your report.

2. Conclude your report with a short reflection of 250 words or less covering any 'aha' moments and what you found exciting or interesting, e.g., a favourite big pre-trained model. You may also wish to share views on the ethics of NLP, given your recent experiences with the auto-generation of text and chatting with a bot through multiple turns.
