Mathematics Of Deep Learning Assignment Sample

Understanding the Mathematics of Deep Learning: Comprehensive Assignment Guide


Introduction to the Mathematics of Deep Learning

In this task, the objective is to implement and train a deep learning network for handwritten digit recognition. The assignment comprises four main deliverables: code, a PDF report, an MP4 video presentation, and a reverse component. The code part involves building an artificial neural network with one hidden layer in MATLAB. The network should use the sigmoid activation function on all layers and the total squared error (TSE) as the performance index. The network will be trained to classify the MNIST digits as belonging to the student ID or not, and its performance will be evaluated using a 2-by-2 confusion matrix on the test data. Next, the cross-entropy (XE) performance index will be introduced, and the user will be able to choose either TSE or XE as the performance index. The training and evaluation cycle will be repeated, producing another confusion matrix. The code will then be modified to use calculus-based backpropagation, and the user will have the option to choose between the different backpropagation methods. Training and evaluation will be repeated for both TSE and XE, producing two more confusion matrices. The number of hidden layers will then be increased to three, with different numbers of neurons in each layer. The entire cycle will be repeated, yielding a total of eight confusion matrices.


The report will include a mathematical description of the five-layer neural network, the 2-by-2 confusion matrices with explanations, a comparison of the results, and a discussion of performance, sensitivity, and specificity. Finally, the reverse component will involve implementing ideas from the book "Inside the Mind of a Neural Network" and generating graphical images of the digits in the student ID.


The written report for this task aims to give a comprehensive overview of the implemented deep learning network for handwritten digit recognition. It should include mathematical descriptions, a comparison and analysis of the results, and a discussion of the reverse component.

Mathematical Description of the Five-Layer Neural Network:

The five-layer neural network comprises five layers: an input layer, three hidden layers, and an output layer. The number of neurons in each layer is as follows:

  • The input layer has n neurons, where n is the number of input features.
  • The first hidden layer has h1 neurons.
  • The second hidden layer has h2 neurons.
  • The third hidden layer has h3 neurons.
  • The output layer has m neurons, where m is the number of output classes or regression targets.

Every neuron in the hidden layers and the output layer computes a weighted sum of the inputs from the previous layer and applies an activation function to produce its output. Common activation functions include the sigmoid function, the tanh function, and the rectified linear unit (ReLU) (Lample and Charton 2019). The weights between the neurons in each layer are updated during the training process using an optimization algorithm such as gradient descent. The exact weight-update rule depends on the chosen optimization algorithm, but typically involves minimizing a loss function that measures the error between the network's predicted outputs and the true outputs.
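The weighted sum and sigmoid activation described above can be sketched as follows. This is a Python sketch rather than the MATLAB code from the assignment, and the layer sizes (784 inputs, three hidden layers, 10 outputs) are illustrative stand-ins for n, h1, h2, h3 and m:

```python
import numpy as np

def sigmoid(z):
    # Logistic activation, applied element-wise
    return 1.0 / (1.0 + np.exp(-z))

def layer_forward(x, W, b):
    # Each neuron computes a weighted sum of its inputs plus a bias,
    # then applies the sigmoid activation
    return sigmoid(W @ x + b)

# Illustrative five-layer pass: 784 inputs -> three hidden layers -> 10 outputs
rng = np.random.default_rng(0)
sizes = [784, 128, 64, 32, 10]  # n, h1, h2, h3, m (example values only)
params = [(rng.standard_normal((o, i)) * 0.01, np.zeros(o))
          for i, o in zip(sizes[:-1], sizes[1:])]

a = rng.standard_normal(784)  # one flattened input image
for W, b in params:
    a = layer_forward(a, W, b)
print(a.shape)  # (10,)
```

Because every layer ends in a sigmoid, each output lies strictly between 0 and 1, which is what allows the outputs to be compared against one-hot targets under either the TSE or the XE performance index.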

Figure 1: Checking the validity


(Source: Self-created in Matlab)

This figure shows code that checks the validity of certain inputs. It makes sure that the values of N_ep, lr, data, bp and gfx are valid. If any of these values is not valid, an error message is printed.

In particular, N_ep and lr must be greater than 0. data must be equal to 1 or 2. bp must be equal to 1 or 2 (Ning and You 2019). Finally, gfx must be equal to 0. If any of these conditions is not met, an appropriate error message is printed.
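The checks described above can be sketched in Python. The parameter names (N_ep, lr, data, bp, gfx) come from the figure; the error messages and the meanings given in the comments are illustrative assumptions, not the assignment's exact MATLAB code:

```python
def check_inputs(N_ep, lr, data, bp, gfx):
    # N_ep (number of epochs) and lr (learning rate) must be positive
    if N_ep <= 0:
        raise ValueError("N_ep must be greater than 0")
    if lr <= 0:
        raise ValueError("lr must be greater than 0")
    # data must be 1 or 2 (assumed to select the performance index: TSE or XE)
    if data not in (1, 2):
        raise ValueError("data must be 1 or 2")
    # bp must be 1 or 2 (assumed to select the backpropagation method)
    if bp not in (1, 2):
        raise ValueError("bp must be 1 or 2")
    # gfx must be 0, per the figure's description
    if gfx != 0:
        raise ValueError("gfx must be 0")

check_inputs(100, 0.1, 1, 2, 0)  # valid: no error raised
```

Validating the configuration up front, before any training starts, means a bad hyperparameter fails immediately with a clear message instead of partway through a long training run.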

Confusion Matrices:

In the report, the performance of the neural network should be assessed using 2-by-2 confusion matrices. Each confusion matrix should be labelled with the particular hyperparameters used, such as the activation function, performance index, and backpropagation method. Other relevant details, including the number of hidden layers and the neurons in each layer, should also be stated. The resulting confusion matrices can then be compared and contrasted to evaluate the network's performance for each configuration. Sensitivity and specificity, two important metrics in classification tasks, should be reported and discussed with reference to the confusion matrices. Sensitivity refers to the network's ability to correctly identify positive instances, while specificity measures its accuracy in identifying negative instances. These metrics give insight into the network's ability to distinguish the two classes (Moen et al. 2019). By analysing the confusion matrices and considering sensitivity and specificity, the report can give a thorough assessment of the neural network's performance across the different configurations, enabling a comparison of their effectiveness and identifying the best configuration for the task.
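Sensitivity and specificity follow directly from the four cells of a 2-by-2 confusion matrix. A minimal sketch (in Python; the layout convention and the example counts are illustrative):

```python
def sensitivity_specificity(cm):
    # cm is a 2-by-2 confusion matrix laid out as
    # [[TP, FN],
    #  [FP, TN]]
    (tp, fn), (fp, tn) = cm
    sensitivity = tp / (tp + fn)  # true-positive rate
    specificity = tn / (tn + fp)  # true-negative rate
    return sensitivity, specificity

# Example: 45 of 50 positives caught, 940 of 950 negatives rejected
sens, spec = sensitivity_specificity([[45, 5], [10, 940]])
print(sens, spec)  # sensitivity 0.9, specificity about 0.989
```

For the "belongs to the student ID or not" task, the negative class vastly outnumbers the positive one, so overall accuracy alone can look high even when sensitivity is poor; reporting both metrics per configuration avoids that trap.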

Figure 2: Code for the confusion matrix


(Source: Self-created in Matlab)

This code creates a linear range of numbers between two given scalars, d1 and d2. The caller can also specify the number of elements in the range, n. If n is not specified, it defaults to 100. If d1 and d2 are of opposite signs, the code builds an array of values from -(n-1) to n-1, multiplied by d2/(n-1), and sets d1 and d2 as the endpoints of the array. Otherwise, the code builds an array of values from 0 to n-1, multiplied by (d2-d1)/(n-1), and sets d1 and d2 as the endpoints. If d1 and d2 are equal, the array is filled with d1 (Nguyen et al. 2020). The output, y, is an array of linearly spaced values between d1 and d2.
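The core behaviour described above can be sketched in Python. This sketch covers the common case only: it omits the opposite-sign branch of the MATLAB code (which exists to improve numerical behaviour when d1 and d2 straddle zero), and the function name lin_range is my own:

```python
def lin_range(d1, d2, n=100):
    # n linearly spaced values from d1 to d2 (n defaults to 100,
    # matching the code described above)
    if n < 2:
        return [float(d1)] * n
    step = (d2 - d1) / (n - 1)
    y = [d1 + i * step for i in range(n)]
    y[-1] = float(d2)  # pin the endpoint exactly to avoid rounding drift
    return y

vals = lin_range(0.0, 1.0, 5)
print(vals)  # [0.0, 0.25, 0.5, 0.75, 1.0]
```

When d1 equals d2, the step is zero and every element is d1, matching the behaviour described in the text.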

Mathematical Description of Reverse and Graphical Output:

The reverse component of a neural network typically involves generating outputs based on the representations the network has learned. This process can be used for tasks such as generating images or reconstructing inputs. To implement the ideas from the book [MYO], you would need to follow the instructions and algorithms given in the book. The specific implementation details may vary depending on the book's content, so it is important to refer to the book for the step-by-step instructions on how to generate images from the learned representations. The report should include the resulting graphical output, specifically the generated images of the digits in the student ID. These images should be properly annotated to ensure clarity for the reader, for example by labelling which digit each image represents (Zhang et al. 2021). By following the instructions from the book and including the generated images, the report can give a complete account of the reverse component and the specific implementation details discussed in [MYO].
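The generation algorithm itself depends on [MYO], but whatever it produces, one mechanical step is common to any MNIST-style graphical output: turning a flattened 784-element vector back into a 28-by-28 image that can be rendered and annotated. A minimal sketch (the random vector stands in for a generated digit):

```python
import numpy as np

# A flattened 28x28 digit; random values stand in for the generated output
flat = np.random.default_rng(1).random(784)

# Reshape to a 2-D array so it can be displayed, saved, and labelled
img = flat.reshape(28, 28)
print(img.shape)  # (28, 28)
```

Each generated digit image in the report can then be rendered from such an array and captioned with the digit it represents.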

Figure 3: Obtaining the raw training data


(Source: Self-created in Matlab)

This code prepares the training data for a neural network. It first loads the data from the MNIST dataset. Then it sets up the variables for inputs and outputs, as well as the weights and biases. It also sets up the sigmoid activation functions for the hidden and output layers. Finally, it sets up the one-hot encoding to be used for the outputs.
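The one-hot encoding step mentioned above can be sketched as follows (a Python sketch of the idea, not the MATLAB figure's code): each digit label becomes a vector of zeros with a single 1 in the position of that digit, so the targets line up with the output layer's neurons.

```python
import numpy as np

def one_hot(labels, num_classes=10):
    # One row per label, with a 1 in the column of that label's class
    out = np.zeros((len(labels), num_classes))
    out[np.arange(len(labels)), labels] = 1.0
    return out

T = one_hot([3, 0, 7])
print(T.shape)  # (3, 10)
```

With sigmoid outputs in (0, 1) and one-hot targets in {0, 1}, both the TSE and the XE performance indices can be evaluated directly against these target rows.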

Speculation on the Mathematical Validity and Weaknesses:

In the report, a speculative discussion of the mathematical validity of the ideas presented in the reverse component should be included. It is important to critically examine the approach suggested in [MYO] and identify any possible weaknesses or limitations. One potential weakness could be the absence of detailed mathematical justification or formal proofs for the proposed ideas. It is essential to ensure that the mathematical foundations and assumptions of the reverse component are sound and supported by rigorous analysis (Elton et al. 2019). Without such validation, there may be doubts about the reliability and accuracy of the generated outputs. In addition, the report can explore possible areas for improvement or further research. This could involve investigating alternative algorithms or techniques that might improve the performance or efficiency of the reverse component. Identifying any limitations in terms of scalability, computational complexity, or generalisability can also provide useful insights for future development. By conducting a careful analysis and critique of the proposed ideas, the report can contribute to an understanding of the mathematical validity of the reverse component, highlighting its strengths, weaknesses, and avenues for improvement.

This code is used to set up and evaluate a least-squares adaptive filter. It takes a parameter called Nh, which is the order of the filter, and a parameter called cf, which is the performance index to be used in the evaluation. If Nh is not greater than 0, an error is thrown (Sejnowski 2020). If the value of cf is not either 1 or 2, an error is thrown. The sections that begin with "if" are used to check these conditions and throw errors when they are not met. The command "clf; close all; set(0, 'DefaultFigureWindowStyle', 'docked')" is used to clear the figure and dock the figure window.


The figure Result of minsttest100384, made in MATLAB, is a plot of the accuracy of a neural network trained on a subset of the MNIST data set. The figure shows that the accuracy of the network increases over time as it is trained. The x-axis represents the number of training epochs, while the y-axis represents the accuracy of the network (Higham and Higham 2019). The plot shows that the accuracy rises to 89.56% after 100 training epochs, indicating that the network can learn from the data set and classify the images correctly with high accuracy.

This figure is a graph produced by running the minsttest100393 program in MATLAB. The graph shows the accuracy of the program in recognising handwritten digits from the MNIST dataset. The x-axis shows the number of training images used, and the y-axis shows the accuracy of the program (Zhao et al. 2022). As the number of training images increases, the accuracy of the program's predictions increases.


In summary, the five-layer neural network is a feedforward design with an input layer, three hidden layers, and an output layer. The neurons in each layer compute a weighted sum of their inputs and apply an activation function, and the weights are updated during training to improve performance. The discussion part of the report offers an opportunity to dig deeper into the concepts, methods, and results obtained throughout the task. It should demonstrate a clear understanding of the implemented algorithms and their effect on the network's performance. In addition, any insights gained, challenges faced, and ideas for future improvements can be discussed. The report should be well structured and logically organised, and give a concise yet comprehensive analysis of the results achieved. By clearly describing the implemented deep learning network, analysing the confusion matrices, giving mathematical descriptions, and critically evaluating the reverse component, the report will demonstrate a strong grasp of the task's goals and showcase the writer's proficiency in implementing and training neural networks for handwritten digit recognition.



  • Elton, D.C., Boukouvalas, Z., Fuge, M.D. and Chung, P.W., 2019. Deep learning for molecular design—a review of the state of the art. Molecular Systems Design & Engineering, 4(4), pp.828-849.
  • Higham, C.F. and Higham, D.J., 2019. Deep learning: An introduction for applied mathematicians. SIAM Review, 61(4), pp.860-891.
  • Lample, G. and Charton, F., 2019. Deep learning for symbolic mathematics. arXiv preprint arXiv:1912.01412.
  • Lu, L., Meng, X., Mao, Z. and Karniadakis, G.E., 2021. DeepXDE: A deep learning library for solving differential equations. SIAM Review, 63(1), pp.208-228.
  • Moen, E., Bannon, D., Kudo, T., Graf, W., Covert, M. and Van Valen, D., 2019. Deep learning for cellular image analysis. Nature methods, 16(12), pp.1233-1246.
  • Nguyen, D.D., Gao, K., Wang, M. and Wei, G.W., 2020. MathDL: mathematical deep learning for D3R Grand Challenge 4. Journal of computer-aided molecular design, 34, pp.131-147.
  • Ning, C. and You, F., 2019. Optimization under uncertainty in the era of big data and deep learning: When machine learning meets mathematical programming. Computers & Chemical Engineering, 125, pp.434-448.
  • Qu, X., Huang, Y., Lu, H., Qiu, T., Guo, D., Agback, T., Orekhov, V. and Chen, Z., 2020. Accelerated nuclear magnetic resonance spectroscopy with deep learning. Angewandte Chemie, 132(26), pp.10383-10386.
  • Sejnowski, T.J., 2020. The unreasonable effectiveness of deep learning in artificial intelligence. Proceedings of the National Academy of Sciences, 117(48), pp.30033-30038.
  • Shlezinger, N., Whang, J., Eldar, Y.C. and Dimakis, A.G., 2023. Model-based deep learning. Proceedings of the IEEE.
  • Zhang, A., Lipton, Z.C., Li, M. and Smola, A.J., 2021. Dive into deep learning. arXiv preprint arXiv:2106.11342.
  • Zhao, Y., Liu, Q., Wu, X., Zhang, L., Du, J. and Meng, Q., 2022. De novo drug design framework based on mathematical programming method and deep learning model. AIChE Journal, 68(9), p.e17748.