The 8th International Conference on Electronic Technology and Information Science (ICETIS 2023)

Keynote Speakers (Leeds)

Prof. Yanguo Jing

Leeds Trinity University, UK



Bio: Professor Dr Yanguo Jing is the Dean of the Faculty of Business, Computer Science and Digital Industries at Leeds Trinity University, where he is a Professor of Artificial Intelligence. He has a PhD (Heriot-Watt University, UK), an MSc, a first-class BSc (Hons) in Computer Science and a PGCert in Learning and Teaching in Higher Education. He has over 20 years' teaching, research and enterprise experience in the UK and in China. Yanguo is a Certified Management & Business Educator, a Fellow of the British Computer Society, a Chartered IT Professional and a Principal Fellow of the Higher Education Academy in the UK. Prof. Jing's prime research interests are AI and big data. His recent research focuses on the use of machine-learning methods to capture interaction and user behaviour patterns that can be used to develop intelligent applications. This research has been applied in areas such as business analytics, sports analytics, and user behaviour pattern recognition in social networks and extra-care/assisted-living settings. He has participated in several research, KTP and consultancy projects with sponsors and clients such as Cadent Gas, Pfizer, the Welsh Government, KPIT, the UK's Comic Relief charity and JISC.


Title: The use of machine learning to predict technical skills in youth grassroots soccer

Abstract: The aim of this study was to determine the contributors to technical skill in grassroots youth football players using a machine learning approach. Machine learning models were used to predict technical skill. A recursive feature elimination method was used to eliminate the worst-performing features using linear regression and ridge regression. Five machine learning models (linear, ridge, lasso, random forest and boosted trees) were used in the study. Results from the machine learning analysis indicated that total Fundamental Movement Skills (FMS) score (0-50) was the most important feature in predicting technical soccer skill, closely followed by coach rating of the child's skill for their age, years of playing experience and Age at Peak Height Velocity (APHV). Using a random forest, technical skill could be predicted with 99% accuracy in boys who play grassroots soccer, with FMS being the most important contributor. Coaches at grassroots level should therefore be mindful of the importance of FMS for technical skill in youth players.
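To make the workflow concrete, below is a minimal sketch of the kind of pipeline the abstract describes, using scikit-learn on synthetic stand-in data; the column names, hyperparameters and feature counts are illustrative assumptions, not the study's actual variables or settings.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.feature_selection import RFE
from sklearn.linear_model import Lasso, LinearRegression, Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the study's dataset; column names are illustrative only.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "fms_total": rng.integers(10, 51, n),      # total FMS score (0-50)
    "coach_rating": rng.integers(1, 6, n),     # coach rating of skill for age
    "years_playing": rng.integers(0, 10, n),
    "aphv": rng.normal(13.5, 0.8, n),          # age at peak height velocity
    "height": rng.normal(150, 10, n),
})
df["technical_skill"] = 0.6 * df["fms_total"] + 2 * df["coach_rating"] + rng.normal(0, 3, n)

X, y = df.drop(columns=["technical_skill"]), df["technical_skill"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Recursive feature elimination driven by a linear model, as in the abstract.
selector = RFE(estimator=LinearRegression(), n_features_to_select=4).fit(X_train, y_train)
selected = X.columns[selector.support_]

# The five model families compared in the study.
models = {
    "linear": LinearRegression(),
    "ridge": Ridge(alpha=1.0),
    "lasso": Lasso(alpha=0.01),
    "random_forest": RandomForestRegressor(n_estimators=300, random_state=0),
    "boosted_trees": GradientBoostingRegressor(random_state=0),
}
for name, model in models.items():
    model.fit(X_train[selected], y_train)
    print(name, round(r2_score(y_test, model.predict(X_test[selected])), 3))

# Feature importances from the random forest, e.g. to inspect the role of FMS.
rf = models["random_forest"]
print(sorted(zip(selected, rf.feature_importances_), key=lambda t: -t[1]))
```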



Dr. Xin Lu

Bournemouth University, UK




Bio: Dr Xin Lu is a Senior Lecturer in Computer Science at Bournemouth University. Prior to this, he was a lecturer at Coventry University. Xin received his BSc degree in Electronic Science and Technology at Beijing Institute of Technology, and his MSc degree in Electronic, Electrical and System Engineering and PhD degree in Computer Science at Loughborough University. Xin has more than 10 years' research experience in designing big data analytics and deep-learning-enabled IoT systems for both smart factory and smart home applications. His current research areas are big data analysis and deep learning for smart systems in intelligent manufacturing, digital healthcare and smart cities; intelligent edge/fog-enabled IoT for smart systems; and human-robot collaboration for intelligent manufacturing and digital health.


Title: Big data analysis for intelligent manufacturing

Abstract: Big data analytics has been widely used to empower intelligent systems for sustainable development in the environment, industry and the economy, thanks to its ability to efficiently discover hidden patterns, correlations and valuable insights in large datasets. Due to increasingly customised manufacturing, unpredictable ambient working conditions on shop floors and stricter sustainability requirements, it is challenging to optimise production lines efficiently. In this talk, big-data-analytics-enabled intelligent manufacturing systems are used as an example to illustrate how enterprises can benefit from this advanced IT technology. Several real-world case studies will be introduced to show how the performance of production lines can be greatly enhanced by optimising shop-floor planning and decision making and reducing operating costs through the implementation of big data technology.



Dr. Yashar Baradaranshokouhi



Title: Applications of Biomedical Signal Processing on Patients with Epilepsy and Seizure

Abstract: Epilepsy is a neurological condition associated with repeated seizures that start in the brain. There are over 40 established types of seizure, which can vary from one person to another. Some people may experience a few blank seconds and then wander around confused, while others may collapse with severe shaking (Epilepsy Society, 2021).

There are over 600,000 individuals living with epilepsy in the UK, and every day 87 people in the UK are diagnosed with epilepsy (Epilepsy Facts and Terminology - Epilepsy Action, 2023). Epilepsy affects their lives in different ways, from employability and driving to day-to-day activities, and in some cases it is fatal. Epilepsy is among the most common long-term neurological conditions in children, affecting more than 112,000 children and young adults in the UK (Hargreaves et al., 2019).

The estimated cost of established epilepsy cases within the UK has been in excess of 1.9 billion pounds, with 69% of the costs due to indirect costs such as unemployment and excess mortality (Cockerell et al., 1994).

The current research explores the applications of Gaussian estimation frameworks, advanced signal processing, and complex mathematical and computational modelling of the human brain's electrical activity. The research aims to analyse human electroencephalography (EEG) recordings, predict the type of seizure, extract features from EEG data used for the prediction of an epileptic seizure, and provide insight into the role of heterogeneous neural connectivity in forming different types of seizures.

First-order and second-order neural field models have been used to generate the synthetic data. The Extended Kalman Filter (EKF) has been applied as the Gaussian estimation framework to obtain the connectivity kernel parameters for the homogeneous and heterogeneous connections among brain tissue. The results of the estimation framework have been validated using mathematical methods such as Monte Carlo simulation to assure the convergence of the results (Freestone et al., 2011).
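As a concrete illustration of the estimation machinery, the sketch below shows a minimal Extended Kalman Filter that jointly estimates a hidden state and a single connectivity-like gain parameter from synthetic observations; the one-dimensional toy model, noise levels and parameter names are assumptions for illustration, not the neural field equations used in the research.

```python
import numpy as np

# Simulate synthetic observations from a toy nonlinear model whose gain `theta`
# stands in for a connectivity-kernel parameter.
rng = np.random.default_rng(1)
true_theta, Q, R = 0.8, 0.05, 0.01
x, ys = 0.5, []
for _ in range(2000):
    x = true_theta * np.tanh(x) + rng.normal(0, np.sqrt(Q))
    ys.append(x + rng.normal(0, np.sqrt(R)))

z = np.array([0.0, 0.0])          # augmented state [x, theta]
P = np.eye(2)
H = np.array([[1.0, 0.0]])        # only x is observed
Qz = np.diag([Q, 1e-5])           # tiny parameter noise keeps theta adaptable
for y in ys:
    x_hat, th = z
    # Predict: propagate the augmented state and linearise the dynamics.
    z_pred = np.array([th * np.tanh(x_hat), th])
    F = np.array([[th * (1 - np.tanh(x_hat) ** 2), np.tanh(x_hat)],
                  [0.0, 1.0]])
    P = F @ P @ F.T + Qz
    # Update: standard Kalman correction with the scalar observation.
    S = H @ P @ H.T + R
    K = P @ H.T / S
    z = z_pred + (K * (y - z_pred[0])).ravel()
    P = (np.eye(2) - K @ H) @ P

print("estimated connectivity gain:", z[1])   # should drift toward true_theta
```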

The research could contribute to the prediction of seizures, improve treatment from the early stages, and hence improve the quality of life of individuals with this medical condition.



Anurag Upadhyay (PhD Candidate)



Title: A Deep-Learning-Based Approach for Aircraft Engine Defect Detection

Abstract: The engine, or powerplant, is one of the most critical components of an aircraft and is subjected to regular maintenance to ensure that the aircraft remains in a state of continued airworthiness. One of the most widely used inspection procedures is the borescope inspection. The current practice for borescope inspection involves inserting a flexible camera into the borescope inspection port on the engine; the images captured by the camera are displayed on a portable monitor and then manually inspected by the technician for any signs of damage.

Borescope inspection is a labour-intensive process used to find defects in aircraft engines, which contain areas not visible during a general visual inspection. The outcome of the process largely depends on the judgement of the maintenance professionals who perform it. This research develops a novel deep learning framework for automated borescope inspection. In the framework, a customised U-Net architecture for instance segmentation is developed to detect edge defects on high-pressure compressor blades.
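For readers unfamiliar with the architecture family, the sketch below shows a compact U-Net-style encoder-decoder with skip connections in PyTorch; it is a generic illustration of the approach named in the abstract, and the depth, channel widths and input size are arbitrary assumptions rather than the authors' customised design.

```python
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, out_ch=1):
        super().__init__()
        self.enc1 = double_conv(in_ch, 32)
        self.enc2 = double_conv(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = double_conv(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = double_conv(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = double_conv(64, 32)
        self.head = nn.Conv2d(32, out_ch, 1)   # per-pixel defect logits

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)

# e.g. a batch of greyscale borescope frames -> per-pixel defect mask logits
logits = TinyUNet()(torch.randn(2, 1, 128, 128))
print(logits.shape)   # torch.Size([2, 1, 128, 128])
```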

Since motion blur is introduced in some images as the blades are rotated during the inspection, it is important to handle the motion blur before the images are given as input to the segmentation model. For this, a hybrid motion-deblurring method for image sharpening and denoising is applied to remove these effects. The hybrid model is based on classic computer vision techniques in combination with a customised GAN model. The framework also addresses data imbalance, the small size of the defects and data availability issues, in part by testing different loss functions and by generating synthetic images using a customised generative adversarial network (GAN) model.
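As an example of an imbalance-aware loss of the kind the framework experiments with, a soft Dice loss for binary segmentation might look as follows; this particular choice is an assumption for illustration, not necessarily the loss the authors adopted.

```python
import torch

def soft_dice_loss(logits, targets, eps=1e-6):
    """Soft Dice loss for binary segmentation; less sensitive to the tiny
    fraction of defect pixels because it scores overlap rather than counting
    every pixel equally."""
    probs = torch.sigmoid(logits)
    intersection = (probs * targets).sum(dim=(1, 2, 3))
    union = probs.sum(dim=(1, 2, 3)) + targets.sum(dim=(1, 2, 3))
    return (1 - (2 * intersection + eps) / (union + eps)).mean()

# Example call on random logits and a random binary mask.
loss = soft_dice_loss(torch.randn(2, 1, 128, 128),
                      torch.randint(0, 2, (2, 1, 128, 128)).float())
print(loss.item())
```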

The results obtained from the implementation of the deep learning framework achieve precisions and recalls of over 90%. The hybrid model for motion deblurring results in a 10x improvement in image quality. However, the framework achieves only modest success with particular loss functions for very small defect sizes.

The overall finding of the research indicates that an automated approach can help improve the overall efficiency of the inspection process and reduce the required operation time. Future work will focus on the detection of very small defects and on extending the deep learning framework to detect the different types of edge and surface damage captured through borescope inspection.




U. Arif and N. Danino



Title: Enhancing Diamond Valuation through Artificial Intelligence and Machine Learning

Abstract: This paper introduces a novel approach to diamond appraisal by proposing an artificial intelligence (AI) model that predicts the value of loose diamonds.

Diamond appraisal is a crucial process in the diamond industry, as it helps determine the value and insurance premiums of loose diamonds. However, traditional appraisal methods can be time-consuming, subjective, and prone to error. This is where the proposed AI diamond appraisal tool comes in. By leveraging machine learning techniques, the tool can provide a faster, more objective, and accurate way of appraising diamonds.

To develop the AI model, a large dataset of diamond features and their corresponding prices was collected. The data was then pre-processed by removing any missing values, outliers, and irrelevant features. Next, exploratory data analysis (EDA) techniques were applied to visualise the data and gain insights into its distribution and relationships.
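A hedged sketch of this pre-processing and EDA stage is shown below; the file name and column names are illustrative assumptions, not necessarily those of the authors' dataset.

```python
import pandas as pd

df = pd.read_csv("diamonds.csv")                  # hypothetical input file
df = df.dropna()                                  # remove missing values
# Remove simple outliers, e.g. rows with physically impossible values.
df = df[(df["carat"] > 0) & (df["price"] > 0)]
df = df.drop(columns=["id"], errors="ignore")     # drop irrelevant identifier columns

# Basic exploratory analysis: summary statistics and correlations with price.
print(df.describe())
print(df.corr(numeric_only=True)["price"].sort_values(ascending=False))
```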

After EDA, the data was split into training and testing sets, and multiple regression models were built using different algorithms such as linear regression, decision trees, and random forests. The models were evaluated using metrics such as MAE, MSE, RMSE, R2, and adjusted R2 scores, and the best performing model was selected based on these metrics.
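The model-comparison step might look roughly like the sketch below, continuing from the pre-processed frame in the previous sketch; the hyperparameters and the adjusted R2 formula are standard choices assumed here for illustration.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

# `df` is assumed to be the cleaned frame from the pre-processing sketch above.
X = pd.get_dummies(df.drop(columns=["price"]), drop_first=True)
y = df["price"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

for name, model in {
    "linear": LinearRegression(),
    "decision_tree": DecisionTreeRegressor(random_state=42),
    "random_forest": RandomForestRegressor(n_estimators=300, random_state=42),
}.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    mae = mean_absolute_error(y_test, pred)
    mse = mean_squared_error(y_test, pred)
    rmse = np.sqrt(mse)
    r2 = r2_score(y_test, pred)
    adj_r2 = 1 - (1 - r2) * (len(y_test) - 1) / (len(y_test) - X_test.shape[1] - 1)
    print(f"{name}: MAE={mae:.0f} MSE={mse:.0f} RMSE={rmse:.0f} R2={r2:.3f} adjR2={adj_r2:.3f}")
```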

To ensure the model's accuracy and prevent overfitting, an approach based on the variance inflation factor (VIF), with the aid of a correlation matrix, was applied to detect and remove features that did not have a significant impact on diamond prices. This approach also helped identify multicollinearity among independent variables and visualise the strength of the correlation coefficients. This improved the accuracy of the model by selecting the most relevant features and reducing the risk of overfitting.
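One way to implement the VIF-based screening described above is sketched here using statsmodels; the iterative-drop strategy and the threshold of 10 are common conventions assumed for illustration, not necessarily the authors' exact procedure.

```python
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools.tools import add_constant

def drop_high_vif(X: pd.DataFrame, threshold: float = 10.0) -> pd.DataFrame:
    """Iteratively drop the numeric feature with the highest VIF above threshold."""
    X = X.copy()
    while True:
        Xc = add_constant(X)
        vifs = pd.Series(
            [variance_inflation_factor(Xc.values, i) for i in range(1, Xc.shape[1])],
            index=X.columns,
        )
        if vifs.max() <= threshold:
            return X
        X = X.drop(columns=[vifs.idxmax()])   # remove the most collinear feature
```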

The resulting model achieved outstanding performance, with low values of MAE, MSE and RMSE indicating a small prediction error, and high R2 and adjusted R2 scores indicating that the model could explain most of the variance in diamond prices. In addition, 5-fold cross-validation showed that the model generalised to unseen data, with a low standard deviation indicating consistent performance across different subsets of the data. Moreover, the fact that the model's testing performance exceeds its training performance indicates that it is generalisable and not overfit.
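A short sketch of the 5-fold cross-validation check is given below; `best_model`, `X` and `y` are assumed to come from the model-comparison sketch above.

```python
from sklearn.model_selection import cross_val_score

# R2 across five folds; the mean indicates generalisation, the standard
# deviation indicates consistency across subsets of the data.
scores = cross_val_score(best_model, X, y, cv=5, scoring="r2")
print(scores.mean(), scores.std())
```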

To further improve the model's performance, future research could explore the inclusion of other diamond shapes such as emerald, princess and cushion cuts. Additionally, experimentation with different regression techniques such as gradient boosting and support vector machines, together with data augmentation, optimisation algorithms and tuning methods, could improve the model's performance further.

The proposed AI diamond appraisal tool has the potential to revolutionise the diamond industry by providing a more objective and accurate way of assessing diamond values. Delivered via a user-friendly web interface, it is accessible to anyone interested in appraising diamonds, regardless of their expertise in the field.




Dr Aliyu Lawal Aliyu and Jim Diockou



Title: An Analytical Queuing Model based on SDN for IoT Traffic in 5G

Abstract: The latest mobile and wireless communication technology, 5G, will revolutionise the way we communicate and interact in the digital world. 5G is expected to have a large-scale impact on society, industries and the digital economy. The technology will unleash an ecosystem that enables Ultra-Reliable Low Latency Communication (URLLC) and massive Machine-Type Communication (mMTC), which will heavily benefit IoT devices. However, despite the lucrative advantages offered by 5G, the network infrastructure and operations will come with huge financial costs, making capital expenditure (CAPEX) and operational expenditure (OPEX) an issue. With the advent of Software Defined Networking (SDN) and Network Function Virtualisation (NFV), much of this financial burden can be reduced through virtualisation of the access network infrastructure (eNodeB, gNodeB); these access networks send traffic from ubiquitous IoT devices to IP network switches. Considering the massive machine-type traffic and the need for URLLC, an efficient queuing model is needed to cater for the network packets in transit. This paper proposes an analytical Markovian queuing model based on M/M/C/∞/∞ to offer efficient and scalable traffic engineering for the massive traffic that transits via the 5G access networks to the SDN architecture. The SDN controller and NFV will be used to implement the Markovian queuing model and to intelligently and efficiently route the traffic coming from the various 5G access networks to its final destination and egress point through the use of virtual switches.
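For reference, the steady-state metrics of an M/M/c queue of the kind proposed can be computed with the standard Erlang C formulas, as in the short sketch below; the arrival rate, service rate and server count are made-up example values, not figures from the paper.

```python
from math import factorial

def mmc_metrics(lam: float, mu: float, c: int):
    """Return (P_wait, Lq, Wq) for an M/M/c queue with arrival rate lam,
    per-server service rate mu and c servers (requires lam < c * mu)."""
    rho = lam / (c * mu)                                  # server utilisation
    a = lam / mu                                          # offered load in Erlangs
    p0 = 1 / (sum(a**k / factorial(k) for k in range(c))
              + a**c / (factorial(c) * (1 - rho)))
    p_wait = (a**c / (factorial(c) * (1 - rho))) * p0     # Erlang C: P(packet waits)
    lq = p_wait * rho / (1 - rho)                         # mean number of packets queued
    wq = lq / lam                                         # mean queueing delay (Little's law)
    return p_wait, lq, wq

# Example: 900 packets/s offered to 12 virtual switches serving 100 packets/s each.
print(mmc_metrics(lam=900.0, mu=100.0, c=12))
```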




Dr Antesar M. Shabut, Katherine Blair and Jim Diockou



Title: Globalizer App – A Tool for Internationalising Curriculum and Finding Partners Across Higher Education Institutions

Abstract: This research aims to design and develop a mobile app called “Globalizer App” which facilitates finding international partners across higher education institutions in order to internationalise the curriculum.

It works like the dating app Tinder: swipe right if you see a project that looks “attractive” (Tinder Elo, 2021). Academics who are looking to internationalise their curriculum can advertise their projects or class assignments to look for international partners. Those potential partners can ‘swipe right’ to find out more and make contact. This project aligns with the UK HE global outlook strategy and supports students to recognise the benefit of global engagement and international experience. Development has been through various phases including user research, building the Globalizer app and user testing.
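As a toy illustration of the mutual-interest matching mechanic described above, the few lines below sketch one way a “swipe right” match could be recorded; the data model, identifiers and matching rule are purely illustrative assumptions, not the Globalizer App's implementation.

```python
from collections import defaultdict

owners = {"proj-42": "alice@leeds.example"}   # project_id -> advertising academic (example data)
interests = defaultdict(set)                  # user_id -> project_ids they swiped right on

def swipe_right(user_id: str, project_id: str) -> bool:
    """Record interest in a project; report a match when the project's owner has
    already swiped right on a project advertised by this user."""
    interests[user_id].add(project_id)
    owner = owners[project_id]
    return any(owners.get(p) == user_id for p in interests[owner])

print(swipe_right("bob@bournemouth.example", "proj-42"))   # False until interest is mutual
```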

Research shows students engage more with the subject and improve their performance when they work with students in other countries. They experience other cultures and viewpoints without having to travel, democratising the international experience for all. This experience could be facilitated by universities to enrich the student experience, developing an understanding of the global economy, global politics, and global cultures and cultural exchanges.

According to Abreu et al (2008), “universities are central generators and repositories of knowledge in our society”. The frameworks, methodologies and processes used to generate and apply that knowledge impact not only universities as educational and research institutions but also their cultural richness, as well as that of their social and economic environments, in terms of creativity, learning and performance (Ahmad and Karim, 2019). Bringing these generators and repositories of knowledge together in the form of knowledge exchange, defined as “a process which brings together academic staff, users of research, and wider groups and communities” (Solent University, n.d.), can only increase their above-mentioned impact. To maximise their benefit, the platforms that facilitate the full expression of knowledge exchange should be globalised, culturally relevant and contextualised systems of learning and teaching that are useful for both local and international students. It is within this context that this research analyses the current literature on the design, development, release and evaluation of knowledge exchange and curriculum-internationalisation tools and applications that facilitate and continue to nurture knowledge exchange within academia, locally, nationally and globally, before designing, developing and releasing one that will benefit its stakeholders.