Sunday, 19 September 2021
AICTE Two-Day National Level Conference on “Recent Trends in Information Technology” on 22.09.2021.
A COMPARATIVE STUDY OF THE WATERFALL AND INCREMENTAL SOFTWARE DEVELOPMENT LIFE CYCLE MODELS
S. Abirami, Head & Assistant Professor, Department of Computer Application, Marudhar Kesari Jain College for Women, Tamil Nadu, India.
ABSTRACT
The Software Development Life Cycle, or SDLC for short, is a methodology for planning, building, and maintaining information and industrial systems. There are various SDLC models widely used for developing software, each providing a theoretical guideline for development. SDLC methodologies are mechanisms to assure that software meets established requirements, and they impose varying degrees of discipline on the software development process with the goal of making the process more efficient and predictable. SDLC models are important for developing software in a systematic manner so that it is delivered within the time frame and with proper quality. Every SDLC model has its advantages and disadvantages, and it is difficult to decide which model should be applied under which conditions. In the present scenario, all software systems are imperfect because they cannot be built with mathematical or physical certainty. The concept of system life cycle models came into existence to stress the need for a structured approach towards building new or improved systems. For this we need to compare SDLC models. In this paper we compare two well-known life cycle models: the waterfall model and the incremental model.
Monday, 19 July 2021
ARTIFICIAL INTELLIGENCE
INTRODUCTION:
Artificial intelligence (AI) refers to the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions. The term may also be applied to any machine that exhibits traits associated with a human mind, such as learning.
CHARACTERISTICS OF AI:
The ideal characteristic of artificial intelligence is its ability to rationalize and take actions that have the best chance of achieving a specific goal. A subset of artificial intelligence is machine learning, which refers to the concept that computer programs can automatically learn from and adapt to new data without being assisted by humans. Deep learning techniques enable this automatic learning through the absorption of huge amounts of unstructured data such as text, images, or video.
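To make “learning from data without human assistance” concrete, here is a minimal Python sketch: a one-variable linear model fitted by gradient descent. The data points and hyperparameters are invented for the illustration.

```python
# Minimal sketch of "learning from data": fit y = w*x + b by gradient descent.
# The data points and hyperparameters below are illustrative assumptions.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.1)]  # (x, y) pairs

w, b = 0.0, 0.0          # start with no knowledge of the relationship
learning_rate = 0.01

for epoch in range(2000):
    # Gradient of mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    # Adjusting the parameters from the data IS the "learning" step.
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(f"learned model: y = {w:.2f}*x + {b:.2f}")  # ~ y = 2x, inferred from data
```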
CATEGORIZATION OF AI:
Artificial intelligence can be divided into two different categories:
• Weak AI
• Strong AI
WEAK AI:
Weak artificial intelligence embodies a system designed to carry out one particular job. Weak AI systems include video games such as chess programs and personal assistants such as Amazon's Alexa and Apple's Siri. You ask the assistant a question, and it answers it for you.
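As a toy illustration of a system built for one particular job, here is a minimal keyword-matching “assistant” in Python. The intents and canned replies are invented, and real assistants such as Alexa or Siri rely on far more advanced speech and language models.

```python
# Toy single-purpose "assistant": match a question against keyword rules.
# Intents and replies are invented; real assistants use speech recognition
# and large language models, not keyword lookup.

RULES = [
    ({"time"}, "It is 10:42."),          # placeholder answers
    ({"weather"}, "Sunny, 24 degrees."),
    ({"name"}, "I'm a demo assistant."),
]

def answer(question: str) -> str:
    words = set(question.lower().split())
    for keywords, reply in RULES:
        if keywords & words:             # any keyword present in the question?
            return reply
    return "Sorry, that's outside my one job."

print(answer("What is the weather today?"))  # -> "Sunny, 24 degrees."
print(answer("Can you drive my car?"))       # -> outside its single job
```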
STRONG AI:
Strong artificial intelligence systems are systems that carry on tasks considered to be human-like. These tend to be more complex and complicated systems. They are programmed to handle situations in which they may be required to problem-solve without having a person intervene. These kinds of systems can be found in applications like self-driving cars or hospital operating rooms.
TYPES OF AI:
In late 2018, Gartner ran a survey of more than 3,000 CIOs on trends in digital business. Among the many findings, they reported that, when asked which technology they expected to be most disruptive, CIOs mentioned artificial intelligence most often, by a large margin (taking the place of data and analytics, which moved down into second place). In fact, 37% of the leaders surveyed confirmed that they had either already deployed AI in their business or that deployment was in short-term planning.
With AI set to become a crucial part of daily human life in the coming years, it’s important to understand what AI can actually do. In fact, AI is a broad term covering several subsets or types of artificial intelligence. These subsets can be divided by the type of technology required – some require machine learning, big data or natural language processing (NLP), for instance. They can also be differentiated by the level of intelligence embedded in an AI machine – more commonly known as a robot.
Reactive Machines:
Reactive machines are the simplest level of robot. They cannot create memories or use information learnt to influence future decisions – they are only able to react to presently existing situations. IBM’s Deep Blue, a machine designed to play chess against a human, is an example of this. Deep Blue evaluates pieces on a chess board and reacts to them, based on pre-coded chess strategies. It does not learn or improve as it plays – hence, it is simply ‘reactive’.
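To make “reactive” concrete, here is a toy Python move chooser in the spirit of (though vastly simpler than) a chess engine: it scores only the position in front of it using fixed, pre-coded piece values, and keeps no state between calls. The one-dimensional board encoding and the values are assumptions invented for the sketch.

```python
# Toy reactive move chooser: scores only the current position with fixed,
# pre-coded piece values and keeps no memory between calls.

PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

def material_score(board):
    """Sum piece values: uppercase pieces are ours (+), lowercase the opponent's (-)."""
    score = 0
    for square in board:
        if square.upper() in PIECE_VALUES:
            value = PIECE_VALUES[square.upper()]
            score += value if square.isupper() else -value
    return score

def apply_move(board, move):
    """Return a new board with the piece moved from move[0] to move[1]."""
    src, dst = move
    new_board = list(board)
    new_board[dst] = new_board[src]
    new_board[src] = "."
    return new_board

def choose_move(board, legal_moves):
    """Purely reactive choice: score each resulting position *right now*.
    No memory, no learning - the same position always yields the same move."""
    return max(legal_moves, key=lambda m: material_score(apply_move(board, m)))

# Tiny 1-D toy board: our rook 'R' can capture a pawn 'p' or move to an empty square.
board = list("R..p..k")
print(choose_move(board, [(0, 3), (0, 1)]))  # -> (0, 3): capturing scores higher
```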
Limited Memory:
A limited memory machine, as the name might suggest, is able to retain some information learned from observing previous events or data. It can build knowledge using that memory in conjunction with pre-programmed data. Self-driving cars, for instance, store pre-programmed data such as lane markings and maps, alongside observed surrounding information such as the speed and direction of nearby cars, or the movement of nearby pedestrians. These vehicles can evaluate the environment around them and adjust their driving as necessary. As the technology has evolved, the reaction times of these machines in making judgements have also improved – an invaluable asset in a technology as potentially dangerous as self-driving.
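Here is a hedged Python sketch of the “limited memory” idea: an agent that combines pre-programmed data (a speed limit) with a short rolling window of recent observations to decide an action. The class name, numbers, and braking rule are invented for illustration.

```python
from collections import deque

# Pre-programmed data (like a car's built-in maps and rules) - an assumed value.
SPEED_LIMIT = 30.0  # m/s

class LimitedMemoryAgent:
    """Keeps only a short rolling window of recent observations (limited memory)
    and combines it with pre-programmed data to pick an action."""

    def __init__(self, window=5):
        self.recent_gaps = deque(maxlen=window)  # gap to the car ahead, metres

    def observe(self, gap_to_lead_car):
        self.recent_gaps.append(gap_to_lead_car)

    def decide(self, own_speed):
        # Trend over the remembered window: is the car ahead getting closer?
        closing = len(self.recent_gaps) >= 2 and self.recent_gaps[-1] < self.recent_gaps[0]
        if closing or own_speed > SPEED_LIMIT:
            return "brake"
        return "maintain"

agent = LimitedMemoryAgent()
for gap in [40.0, 35.0, 28.0]:       # the lead car is getting closer
    agent.observe(gap)
print(agent.decide(own_speed=25.0))  # -> "brake", inferred from the remembered trend
```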
Theory of Mind:
Human beings have thoughts and feelings, memories or other brain patterns that drive and influence their behaviour. It is on this psychology that theory of mind researchers base their work, hoping to develop computers that are able to imitate human mental models – that is, machines that are able to understand that people and animals have thoughts and feelings that can affect their own behaviour.
It is this theory of mind that allows humans to have social interactions and form societies. Theory of mind machines would be required to use the information derived from people and learn from it, which would then inform how the machine communicates in or reacts to a different situation.
A famous but still very primitive example of this technology is Sophia, the world-famous robot developed by Hanson Robotics, who often goes on press tours as an ever-evolving example to the public of what robots are capable of doing. Whilst Sophia is not natively able to determine or understand human emotion, she can hold basic conversation, has image recognition and an ability to respond to interactions with humans with the appropriate facial expression, and has an incredibly human-like appearance. Researchers have yet to truly develop theory of mind technology, however, with criticisms of Sophia being, for instance, that she is simply “a chatbot with a face”.
Self-awareness:
Self-aware AI machines are the most complex we might ever be able to envision, and are described by some as the ultimate goal of AI. These are machines that have human-level consciousness and understand their existence in the world. They don’t just ask for something they need; they understand that they need something: ‘I want a glass of water’ is a very different statement to ‘I know I want a glass of water’. As a conscious being, such a machine would not just know its own internal state but would be able to predict the feelings of others around it. For instance, as humans, if someone yells at us we assume that person is angry, because we understand that is how we feel when we yell. Without a theory of mind, we would not be able to make these inferences about other humans. Obviously, self-aware machines are, at present, a work of science fiction and not something that exists – and in fact, may never exist. As it is, we’re probably best focusing on the development of machine learning in our AI.
A machine that has a memory, that can learn from events in its memory, and that can then take that learning and apply it to future decisions, is the baseline of evolution in artificial intelligence. Developing this will lead to AI innovation that could turn society on its head, exponentially enhance how we live day to day, and even save lives.
Wednesday, 16 June 2021
INTERNET OF THINGS – IOT
INTRODUCTION
The world today is an “Internet of Things”. Our planet has more connected devices than people. The Internet of Things (IoT) describes the network of physical objects – “things” – that are embedded with sensors, software and other technologies for the purpose of connecting and exchanging data with other devices and systems over the Internet.
IoT connects all potential objects so that they can interact with each other over the Internet, providing a secure, comfortable life for humans. These objects include network-enabled devices such as traffic lights; smart appliances like refrigerators, microwave ovens, washing machines, dishwashers and thermostats; home security systems; computer peripherals like webcams and printers; wearable technology such as Apple Watches and Fitbits; routers and smart speakers; and GPS units, heart-monitoring implants, MRI machines, biochips, and more.
HOW IoT WORKS:
An IoT system consists of sensors/devices which “talk” to the cloud through some kind of connectivity. Once the data gets to the cloud, software processes it and may then decide to perform an action, such as sending an alert or automatically adjusting the sensors/devices, without the need for user involvement.
The IoT system involves web-enabled smart devices that use embedded systems, such as processors, sensors and communication hardware, to collect, send and act on data they acquire from their environments. IoT devices share the sensor data they collect by connecting to an IoT gateway or other edge device, where data is either sent to the cloud to be analyzed or analyzed locally. Sometimes, these devices communicate with other related devices and act on the information they get from one another.
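To ground the sensor-to-cloud flow described above, here is a minimal Python sketch that publishes simulated temperature readings over MQTT, one common IoT connectivity option. The broker address, topic name, and alert threshold are assumptions made for the example, and the code targets the paho-mqtt 1.x client API.

```python
# Minimal sensor-to-cloud sketch using MQTT (a common IoT protocol).
# Requires the paho-mqtt package; broker, topic and threshold are
# illustrative assumptions, and the "sensor" is simulated with random data.
import json
import random
import time

import paho.mqtt.client as mqtt

BROKER = "test.mosquitto.org"        # a public test broker (assumption)
TOPIC = "demo/home/livingroom/temp"  # hypothetical topic name
ALERT_THRESHOLD = 30.0               # degrees C, chosen for the example

client = mqtt.Client()               # paho-mqtt 1.x style constructor
client.connect(BROKER, 1883)
client.loop_start()                  # handle network traffic in the background

for _ in range(5):
    reading = {"temp_c": round(random.uniform(18.0, 35.0), 1), "ts": time.time()}
    client.publish(TOPIC, json.dumps(reading))        # device -> cloud
    if reading["temp_c"] > ALERT_THRESHOLD:
        # In a real system the cloud side would decide this and alert the user.
        client.publish(TOPIC + "/alert", json.dumps(reading))
    time.sleep(1)

client.loop_stop()
client.disconnect()
```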
RADIO FREQUENCY IDENTIFICATION (RFID):
RFID is an automatic identification technology that enables machines or computers to identify objects, record metadata, or control individual targets through radio waves. By connecting RFID readers to the Internet, the readers can identify, track and monitor the objects attached with tags globally, automatically, and in real time, if needed. This is the so-called Internet of Things (IoT), and RFID is often seen as a prerequisite for it.
Adding RFID tags to expensive pieces of equipment to help track their location was one of the first IoT applications. But since then, the cost of adding sensors and an internet connection to objects has continued to fall, and experts predict that this basic functionality could one day be cost-effective, making it possible to connect nearly everything to the internet.
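As a hedged sketch of how RFID feeds the IoT, the following Python snippet models a reader reporting a tag ID to an internet-connected registry that resolves it to an object and logs where and when it was seen. All tag IDs, object names and locations are invented.

```python
# Toy RFID tracking: a reader reports (tag_id, location), and a registry
# resolves the tag to an object and logs the sighting. All IDs are invented.
from datetime import datetime, timezone

TAG_REGISTRY = {
    "E200-3411-B802": "Infusion pump #7",
    "E200-9F00-11A3": "Wheelchair #12",
}
sightings = []  # in a real deployment this would live in a cloud database

def on_tag_read(tag_id, reader_location):
    obj = TAG_REGISTRY.get(tag_id, "unknown object")
    event = {"object": obj, "where": reader_location,
             "when": datetime.now(timezone.utc).isoformat()}
    sightings.append(event)
    return event

print(on_tag_read("E200-3411-B802", "Ward B entrance"))
# -> {'object': 'Infusion pump #7', 'where': 'Ward B entrance', 'when': ...}
```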
STRUCTURE OF IOT:
The four-stage IoT architecture (sketched in code below) consists of:
1. Sensors and actuators
2. Internet gateways and data acquisition systems
3. Edge IT
4. Data center and cloud
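Below is a hedged sketch of how data might flow through those four stages, with each stage reduced to a plain Python function. The stage names follow the list above; the processing inside each function is invented for illustration.

```python
# Toy data flow through the four IoT architecture stages above.
# Each function body is an illustrative stand-in for real hardware/services.

def stage1_sense():
    """Sensors/actuators: produce raw readings."""
    return [21.7, 21.9, 35.2, 22.0]  # e.g. temperature samples (made up)

def stage2_gateway(raw):
    """Gateway / data acquisition: digitize, label, aggregate."""
    return {"device": "sensor-42", "samples": raw, "count": len(raw)}

def stage3_edge(packet):
    """Edge IT: pre-analyze near the source, flag anomalies early."""
    packet["anomalies"] = [s for s in packet["samples"] if s > 30.0]
    return packet

def stage4_cloud(packet):
    """Data center/cloud: deeper analysis, storage, user-facing actions."""
    if packet["anomalies"]:
        return f"ALERT from {packet['device']}: {packet['anomalies']}"
    return f"{packet['device']}: all readings normal"

print(stage4_cloud(stage3_edge(stage2_gateway(stage1_sense()))))
```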
Some common benefits of IoT enable businesses to:
• monitor their overall business processes;
• improve the customer experience;
• save time and money;
• enhance employee productivity;
• integrate and adapt business models;
• make better business decisions; and
• generate more revenue.
APPLICATIONS OF IOT:
There are numerous real-world applications of the internet of things, ranging from consumer IoT and enterprise IoT to manufacturing and industrial IoT (IIoT). IoT applications span numerous verticals, including automotive, telecom, energy, agriculture, and health monitoring.
DISADVANTAGES OF IoT:
• Security – as the number of connected devices increases and more information is shared between devices, the potential that a hacker could steal confidential information also increases.
• Privacy – personal data is at high risk of being leaked due to weak privacy protections.
CONCLUSION:
The IoT has the potential to dramatically increase the availability of information, and is likely to transform companies and organizations in virtually every industry around the world.
“This is just the beginning of the growth of IoT; the future is yet to unfold”.
Thursday, 13 May 2021
CYBER SECURITY
INTRODUCTION
The internet has made the world smaller in many ways, but it has also opened us up to influences that have never before been so varied and so challenging. As fast as security grew, the hacking world grew faster.
There are two ways of looking at the issue of cyber security. One is that the companies providing cloud computing do that and only that, so these companies will be extremely well secured with the latest in cutting-edge encryption technology.
CYBER SECURITY
Cyber security is the protection of internet-connected systems, including hardware, software and data, from cyber-attacks. In a computing context, security comprises cyber security and physical security; both are used by enterprises to protect against unauthorized access to data centers and other computerized systems.
Information security, which is designed to maintain the confidentiality, integrity and availability of data, is a subset of cyber security.
ADVANTAGES OF CYBER SECURITY:
Cyber security provides protection against theft of data, protects computers from theft, minimizes computer freezing, provides privacy for users, and offers strict regulation.
• It protects the personal and sensitive data of individuals and organizations from being stolen.
• Most importantly, it enhances the security of the system in cyberspace.
• It reduces the risk of computers being hacked, thus mitigating the risk of system freezes and crashes.
• It enhances the overall security mechanisms of the business through an improved information framework, resulting in smooth business management activities.
• It protects the system against spyware, viruses, malicious code, Trojan horses, worms, and several other unwanted infectious programs.
DISADVANTAGES OF CYBER SECURITY:
Firewalls can be difficult to configure correctly, and a faultily configured firewall may block users from performing legitimate activities on the Internet until it is set up properly. Software must be continually upgraded with the latest updates to keep protection current, and cyber protection can be expensive for ordinary users.
• Cyber security can be a costly affair for ordinary users and businesses alike, as highly trained professionals are required.
• Security patches must be applied regularly with the latest security definitions, which is difficult to keep up with.
EFFECTS OF CYBER CRIME:
• Your company can be impacted by cyber crime, and a company’s lack of commitment to information security can be very damaging.
• The direct economic impact of such attacks on companies – such as theft of corporate information, interruption of trade, or the cost of repairing damaged systems – can contribute to loss of revenue.
IMPORTANCE OF CYBER SECURITY:
Cyber security is important because of the sheer amount of data that is collected and stored on the internet and on organizations’ servers. Many significant military, medical, government, corporate, industrial and financial organizations depend on the data stored on their servers.
CYBER SECURITY GOALS:
The objective of cyber security is to protect information from being stolen, compromised or attacked. Cyber security can be measured by at least one of three goals (the integrity goal is illustrated in the sketch after this list):
• Protect the confidentiality of data.
• Preserve the integrity of data.
• Promote the availability of data for authorized users.
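To make the integrity goal concrete, here is a minimal Python sketch that detects tampering by comparing SHA-256 digests; the messages are invented for the example.

```python
# Integrity check via cryptographic hashing: any change to the data
# changes its SHA-256 digest, so tampering is detectable.
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

original = b"Transfer 100 INR to account 12345"   # made-up message
stored_digest = digest(original)                   # recorded at send time

received = b"Transfer 900 INR to account 99999"   # tampered in transit
if digest(received) != stored_digest:
    print("Integrity check FAILED: data was modified")  # this branch runs
else:
    print("Integrity verified")
```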
WHY DO WE NEED CYBER SECURITY?
The range of operations of cyber security involves protecting information and systems from major cyber threats. These threats take many forms. As a result, keeping pace with cyber security strategy and operations can be a challenge, particularly in government and enterprise networks where, in their most innovative form, cyber threats often take aim at the secret, political and military assets of a nation, or its people. One of the most common threats is:
• Cyber terrorism: the innovative use of information technology by terrorist groups to further their political agenda, taking the form of attacks on networks, computer systems and telecommunication infrastructures.
MAINTENANCE OF EFFECTIVE CYBER SECURITY
Historically, organizations and governments have taken a reactive, “point product” approach to combating cyber threats, stitching together individual security technologies, one on top of another, to protect their networks and the valuable data within them. Not only is this method expensive and complex, but news of damaging cyber breaches continues to dominate headlines, rendering it ineffective.
WHAT CYBER SECURITY CAN PREVENT
The use of cyber security can help prevent cyber-attacks, data breaches and identity theft, and can aid in risk management. When an organization has a strong sense of network security and an effective incident response plan, it is better able to prevent and mitigate these attacks. For example, end-user protection defends information and guards against loss or theft while also scanning computers for malicious code.
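As a hedged illustration of that last point about scanning for malicious code, here is a toy signature-based scanner in Python. The byte signatures and detection names are harmless placeholders invented for the sketch; real end-user protection products use far more sophisticated detection.

```python
# Toy signature-based malware scan: flag files containing known-bad byte
# patterns. The signatures here are harmless placeholders for illustration.
import os

SIGNATURES = {
    b"EVIL_MACRO_V1": "Demo.Trojan.A",    # invented signature/name pairs
    b"XOR_LOADER_STUB": "Demo.Worm.B",
}

def scan_file(path):
    """Return a list of (detection name, file) hits for one file."""
    with open(path, "rb") as f:
        data = f.read()
    return [(name, path) for sig, name in SIGNATURES.items() if sig in data]

def scan_tree(root):
    """Scan every file under a directory tree."""
    hits = []
    for dirpath, _, filenames in os.walk(root):
        for filename in filenames:
            hits.extend(scan_file(os.path.join(dirpath, filename)))
    return hits

# Usage sketch: write a fake "infected" file, then scan it.
with open("sample.bin", "wb") as f:
    f.write(b"header EVIL_MACRO_V1 payload")
print(scan_file("sample.bin"))  # -> [('Demo.Trojan.A', 'sample.bin')]
```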
TYPES OF CYBER SECURITY THREATS:
Keeping up with new technologies, security trends and threat intelligence is a challenging task. However, it is necessary in order to protect information and other assets from cyber threats, which take many forms:
• Ransomware is a type of malware that involves an attacker locking the victim’s computer system files, typically through encryption, and demanding a payment to decrypt and unlock them.
• Malware is any file or program used to harm a computer user, such as worms, computer viruses, Trojan horses and spyware.
THE LEVEL OF CYBER RISK
There are some additional reasons why the threat is overrated. First, as combating cyber threats has become a highly politicized issue, official statements about the level of threat must also be seen in the context of different bureaucratic entities that compete against each other for resources and influence. This is usually done by stating an urgent need for action (which they should take) and describing the overall threat as big and rising. Second, psychological research has shown that risk perception is highly dependent on intuition and emotions, as well as the perceptions of experts (Gregory and Mendelssohn 1993). Cyber risks, especially in their more extreme form, fit the risk profile of so-called ‘dread risks’, which appear uncontrollable, catastrophic, fatal, and unknown.
REDUCING CYBER-INSECURITY
Three different debates have taken up these concepts, and countermeasures have been produced according to their respective focus. It is common practice for the entity that owns a computer network to take responsibility for protecting it. However, some assets in the private sector are considered so crucial to the functioning of society that governments have to take additional measures to ensure their protection. These efforts are usually grouped under the label of critical (information) infrastructure protection. Information assurance guides infrastructure protection and the management of risk, which is essentially about accepting that one is (or remains) insecure: the level of risk can never be reduced to zero.
CONCLUSION
Depending on their (potential) severity, however, disruptive incidents in the future will continue to fuel the military discourse, and with it fears of strategic cyber-war. Certainly, thinking about (and planning for) worst-case scenarios is a legitimate task of the national security apparatus. However, they should not receive more attention than more plausible and more likely problems. Moreover, there is no way to study the ‘actual’ level of cyber risk in any sound way, because it exists only in and through the representations of various actors in the political domain.