Wednesday, 18 December 2019

Top 10 Trending Technologies

Change is the only constant, and this applies to your professional life as well. Upskilling yourself is a necessity nowadays, for a simple reason: technology is evolving very quickly. Listed here are the top 10 trending technologies, which are expected to capture a huge market in 2020.



1. Artificial Intelligence
2. Blockchain
3. Augmented Reality and Virtual Reality
4. Cognitive Cloud Computing
5. Angular and React
6. DevOps
7. Internet of Things (IoT)
8. Intelligent Apps (I – Apps)
9. Big Data
10. RPA (Robotic Process Automation)

Monday, 11 November 2019

FOG COMPUTING


Fog Computing
The term fog computing (or fogging) was coined by Cisco in 2014, so it is still new to the general public. Fog and cloud computing are interconnected. In nature, fog is closer to the earth than clouds; in the technological world it is just the same: fog is closer to end users, bringing cloud capabilities down to the ground.
The definition may sound like this: fog is the extension of cloud computing that consists of multiple edge nodes directly connected to physical devices.


Such nodes are physically much closer to devices if compared to centralized data centers, which is why they are able to provide instant connections. The considerable processing power of edge nodes allows them to perform the computation of a great amount of data on their own, without sending it to distant servers.
Fog can also include cloudlets — small-scale and rather powerful data centers located at the edge of the network. Their purpose is to support resource-intensive IoT apps that require low latency.
The main difference between fog computing and cloud computing is that the cloud is a centralized system, while fog is a distributed, decentralized infrastructure.
Fog computing is a mediator between hardware and remote servers. It regulates which information should be sent to the server and which can be processed locally. In this way, fog is an intelligent gateway that offloads clouds enabling more efficient data storage, processing and analysis.
One should note that fog networking is not a separate architecture and it doesn’t replace cloud computing but rather complements it, getting as close to the source of information as possible.
The new technology is likely to have the greatest impact on the development of IoT, embedded AI and 5G solutions, as they, like never before, demand agility and seamless connections.
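To make the "intelligent gateway" idea above a little more concrete, here is a minimal sketch of a fog node that handles routine readings locally and forwards only the data the cloud actually needs. The device name, threshold and queue structures are purely illustrative assumptions, not part of any particular fog platform.

```python
# A minimal sketch of the fog-gateway idea: routine sensor data is aggregated
# at the edge, and only alerts and summaries travel to the distant data center.

from dataclasses import dataclass
from statistics import mean

@dataclass
class Reading:
    device_id: str
    temperature_c: float

LOCAL_BUFFER = []            # short-lived data kept on the fog node
CLOUD_QUEUE = []             # messages that would be sent to the data center

ALERT_THRESHOLD_C = 80.0     # hypothetical limit, chosen only for this example

def handle(reading: Reading) -> None:
    """Decide at the fog node what stays local and what goes to the cloud."""
    LOCAL_BUFFER.append(reading)
    if reading.temperature_c >= ALERT_THRESHOLD_C:
        # Anomalies need central attention, so forward them immediately.
        CLOUD_QUEUE.append({"device": reading.device_id,
                            "event": "overheat",
                            "value": reading.temperature_c})
    elif len(LOCAL_BUFFER) >= 10:
        # Routine data is aggregated locally; only a summary travels upstream.
        CLOUD_QUEUE.append({"event": "summary",
                            "avg_temp": mean(r.temperature_c for r in LOCAL_BUFFER)})
        LOCAL_BUFFER.clear()

for i in range(12):
    handle(Reading("sensor-1", 25.0 + i * 5))
print(CLOUD_QUEUE)   # one aggregated summary and one overheat alert
```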
Pros of Fog Computing
The fogging approach has many benefits for the Internet of Things, Big Data and real-time analytics. Here are the main advantages of fog computing over cloud computing:
·         Low latency (fog is geographically closer to users and is able to provide instant responses)
·         No problems with bandwidth (pieces of information are aggregated at different points instead of sending them together to one center via one channel)
·         Lower risk of losing connection (due to multiple interconnected channels)
·         High security (because data is processed by a huge number of nodes in a complex distributed system)
·         Improved user experience (instant responses and no downtimes satisfy users)
·         Power-efficiency (edge nodes run power-efficient protocols such as Bluetooth, Zigbee or Z-Wave)

Cons of Fog Computing
The technology doesn’t have any apparent disadvantages, but some shortcomings can be named:
·         A more complicated system (fog is an additional layer in the data processing and storage system)
·         Additional expenses (companies should buy edge devices: routers, hubs, gateways)
·         Limited scalability (fog is not as scalable as cloud)


Thursday, 10 October 2019

                                    GRID COMPUTING



Grid computing is a group of computers physically connected (over a network or the Internet) to perform dedicated tasks together, such as analysing e-commerce data or solving a complex problem. Grids are a form of "super virtual computer" dedicated to a particular application. Grid size may vary from a small workgroup to a large enterprise network.


A computing grid is constructed with the help of grid middleware software that allows the nodes to communicate. The middleware translates the information passed, stored or processed by one node into a format that another node can recognize. Grid computing is a form of "distributed computing" or "peer-to-peer computing".


Grid computing is distinguished from cluster computing: in grid computing the nodes are heterogeneous and geographically dispersed (for example across a WAN), each has its own resource manager and performs a different task, and they are loosely connected over the Internet or low-speed networks, whereas in cluster computing resources are managed in a single location (like a LAN).




TYPES OF GRID:-
There are many types of grid like:-
1) COMPUTATIONAL GRID:- It pools the resources of many computers in a network to work on a single problem at a time (a simplified sketch follows this list).
2) DATA GRID:- It deals with the controlled sharing and management of large amounts of distributed data.
3) COLLABORATIVE GRID:- It is the grid which solves collaborative problems.
4) MANUSCRIPT GRID:- This grid works well when content is presented in large continuous blocks of text or images.
5) MODULAR GRID:- This grid works well when columns alone don’t offer enough flexibility for complex problems.
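As mentioned in item 1, a computational grid pools many machines to attack one problem. The following minimal sketch imitates that idea on a single computer by splitting a job (counting primes below 100,000) into chunks and handing them to parallel worker processes. In a real grid the chunks would be dispatched to heterogeneous, geographically dispersed nodes through grid middleware, so treat the local process pool purely as a stand-in for illustration.

```python
# Splitting one large problem into independent chunks, processing them in
# parallel, and combining the partial results -- the core pattern behind a
# computational grid, shown here with local processes instead of remote nodes.

from concurrent.futures import ProcessPoolExecutor

def count_primes(bounds):
    """Count primes in [lo, hi) -- the unit of work sent to one 'node'."""
    lo, hi = bounds
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    chunks = [(i, i + 25_000) for i in range(0, 100_000, 25_000)]
    with ProcessPoolExecutor() as pool:          # stand-in for grid nodes
        total = sum(pool.map(count_primes, chunks))
    print(f"primes below 100000: {total}")       # 9592
```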


GRID ACTIVITIES:-
1) A grid shares many different kinds of resources in a way that is transparent to the end user.
2) It can solve a large number of problems that arise in both science and industry.
3) Much of the early grid development was carried out in EU-funded projects.

ADVANTAGES OF GRID COMPUTING:-
1) It can solve more complex problems in a very short span of time.
2) Its resources can easily be combined with those of other organisations.
3) It can make better use of existing hardware.

DISADVANTAGES OF GRID COMPUTING:-
1) Grid software and standards are still evolving.
2) There is a steep learning curve to get started.
3) It is best suited to non-interactive (batch) workloads.

Tuesday, 27 August 2019



Best open source software of 2019

LibreOffice 

There's no need to pay for Office with this open source alternative
LibreOffice is a full suite of office software, including excellent apps for text documents, spreadsheets, presentations, and databases. These are all fully compatible with the latest Microsoft file formats, so you’ll have no trouble sharing files that work with users of Word, Excel, PowerPoint, and Access. 
This means that document formatting is properly preserved for printing if you have to import/export files between LibreOffice and Microsoft Office, something not all office software platforms can do. However, it is a downloadable product rather than one you can work with in the cloud, unlike some others like Office 365 and G Suite.
Documents look just as sharp and professional as those created using paid-for software, and there are hundreds of templates available to download, use and edit.
LibreOffice’s huge community of contributors has compiled a brilliant collection of support materials, including a forum and even live chat if you need a hand.


GIMP

Our favorite open source photo editor, packed with powerful tools
Powerful and flexible, open source image editor GIMP is as close to Adobe Photoshop as you can get without opening your wallet. It supports layers, and is packed with advanced tools for enhancing your pictures or creating new ones from scratch.
You can adjust every aspect of your pictures’ appearance manually, or use the dozens of customizable filters and effects to achieve amazing results with just a few clicks. GIMP comes with a huge array of user-created plugins pre-installed, and adding more is a piece of cake.
If you don’t need the power of GIMP and prefer a simpler interface, check out Paint.NET – another superb open source photo editor that’s a little lighter on features, but easier to master.

 Shotcut

Great for new users, and an excellent substitute for Windows Movie Maker
If you’re looking for a great open source video editor, give Shotcut a whirl. It might look a little stark at first, but add some of the optional toolbars and you’ll soon have its most powerful and useful features at your fingertips.
Some of its best tools include quick filters for audio and video (which are non-destructive and can be layered to achieve different effects), advanced white balancing, wipes and other transitions, color grading, click-and-drag import, and straightforward trimming and compositing of clips.

Mozilla Thunderbird 

A free client that's an ideal replacement for the defunct Windows Live Mail
If you have multiple email accounts – even if they’re with the same provider – open source email client Mozilla Thunderbird will save you time and hassle flicking between browser tabs and logins. Like Firefox, Thunderbird is an open source project published by the Mozilla Foundation, and is almost infinitely adaptable.
Thunderbird's standard features include an RSS reader and the ability to link to files too large to send as attachments, and its optional extras include weather forecasts and Google app tabs.

FileZilla


If you run your own website, the chances are you'll need FTP software to upload files directly to your server. While there are some good existing FTP clients out there, FileZilla is probably the best free version you can use.
It does all that you need to with a file upload client, which remains relatively simple anyway. On the left pane, FileZilla presents you with a view of your folder selection (from Windows Explorer, if using Windows) where you can ensure you select your folder of files to upload - on the right, the pane shows your location on the server, which will be a similar-looking file tree.
You just need to ensure you click through the folders on the right pane to the place where you want to upload your files, such as within the public_html folder on many Linux servers. Then it's simply a matter of using drag and drop to move the files you want to upload from the left pane into the right pane.
Simple, easy, and usually very painless. The main stumbling block for most first-time users is not selecting the correct files to upload, or especially the correct locations.
If you need to CHMOD permissions for files, that's as easy as a right-click on any files or folders you need to apply them to, and that's about it.
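If you ever want to script the same upload-and-CHMOD workflow instead of dragging and dropping, here is a minimal sketch using Python's standard ftplib module. The hostname, credentials and file names are placeholders for illustration, and many hosts will require FTPS or SFTP rather than plain FTP.

```python
# Upload a file to the web root and set its permissions, mirroring what
# FileZilla does with drag and drop plus the right-click permissions dialog.

from ftplib import FTP

with FTP("ftp.example.com") as ftp:          # placeholder hostname
    ftp.login(user="youruser", passwd="yourpassword")
    ftp.cwd("public_html")                   # the server-side folder shown in the right pane
    with open("index.html", "rb") as f:      # the local file picked in the left pane
        ftp.storbinary("STOR index.html", f)
    # Equivalent of FileZilla's right-click "File permissions..." (CHMOD):
    ftp.sendcmd("SITE CHMOD 644 index.html")
```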




Sunday, 28 July 2019

EDGE COMPUTING




               Edge Computing


Computing workloads are increasing across industry, from the manufacturing plant producing custom springs to the IoT television streaming Netflix. As network traffic has grown, data center infrastructure and networking costs have ballooned. The rise of enormous centralized data centers – or server farms – has rocketed companies like Amazon and Microsoft to the forefront of the technology sector. That growth comes at a cost, though, both to those behemoth companies and the SMBs who rely on Amazon Web Services (AWS) and Microsoft Azure cloud computing services. One solution? Edge computing.
Edge computing moves some of the computational needs away from the centralized point to logical geographic nodes “at the edge” of the network, close to where the computing is needed. Edge computing increases the performance of applications and relieves increased bandwidth requirements from the core network. A recent report showed potential latency improvements and reductions in data transferred to the cloud of up to 95%.

Simply put: edge computing reduces data center costs by enabling more efficient use of cloud computing architecture.                                                                                                  
How Does Edge Computing Work?
Edge computing is used by different people to do different things, and the way it works varies depending on its use. Most people think of edge computing in connection with the Internet of Things (IoT). For every Office 365 email and Amazon Echo request, your device must resolve, compress, and transfer that information to the cloud, where it is received, decompressed, processed – possibly through another API – and then transferred back to you. And that takes time. We refer to the time that process takes as latency.
Edge computing enables data to be analyzed, processed, and transferred at the edge of the network. This distributed architecture is what makes IoT and mobile computing functional. The device you use, or a local server, can process the data instead of sending it to a centralized data center, saving time and improving performance.
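A rough back-of-the-envelope sketch shows why moving the processing to the edge matters. The millisecond figures below are assumptions picked only to illustrate the comparison, not measurements from any real deployment.

```python
# Compare cumulative latency for 1,000 requests handled by a distant cloud
# data center versus a nearby edge node, using assumed round-trip times.

REQUESTS = 1_000
CLOUD_ROUND_TRIP_MS = 120      # device -> regional data center -> device (assumed)
EDGE_ROUND_TRIP_MS = 8         # device -> nearby edge node -> device (assumed)
PROCESSING_MS = 5              # the compute itself, the same in both places

cloud_total = REQUESTS * (CLOUD_ROUND_TRIP_MS + PROCESSING_MS)
edge_total = REQUESTS * (EDGE_ROUND_TRIP_MS + PROCESSING_MS)

print(f"cloud-only: {cloud_total / 1000:.1f} s of cumulative latency")
print(f"edge:       {edge_total / 1000:.1f} s of cumulative latency")
print(f"reduction:  {100 * (1 - edge_total / cloud_total):.0f}%")   # about 90%
```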
Why Edge Computing?

The short version: edge computing reduces latency, provides near real-time data analysis, and reduces overall data traffic.
The long version: Everyone benefits. Whether you’re an oil tycoon analyzing real-time data uploads from your network of deep sea oil rigs or a hardcore Twitch Fortnite gamer streaming video of your best solo round ever, latency (see: lag) and delayed data transfer have real impacts. By processing data as close to the end user as possible, data computing and content delivery happens much more quickly.

Can Edge Computing Reduce Data Center Costs?

Edge computing delivers better bandwidth and more computing power. Backup and disaster recovery strategies, customer contact channels, and access to mission-critical applications are just as important to billion-dollar corporations as they are to a small healthcare provider in a second- or third-tier city. In the face of such technological conversations as net neutrality and micro-multinational business growth, the opportunity for businesses of any size to colocate in local, edge data centers is essential to the continued growth of our robust economy.



Sunday, 7 July 2019



NINE TECHNOLOGY TRENDS IN 2019


                     2019 is almost here and with it a flood of lists describing the trends that will define various fields in the new year. From among these predictions, those related to new technological standards stand out first and foremost, given that they will end up revolutionizing every industry, in an age when digital transformation plays a major role. After evaluating various consulting firm reports, we conclude that these are the nine major trends that will define technological disruption in the next 365 days.

1. 5G Networks

Spain’s National 5G Plan for 2018-2020 stipulates that throughout 2019, pilot projects based on 5G will be developed resulting in the release of the second digital dividend. Hence, the groundwork is being laid so that in 2020 we will be able to browse the Internet on a smartphone at a speed that will reach 10 gigabytes per second. Data from Statista, a provider of market and consumer data, indicates that by 2024, 5G mobile network technology will have reached more than 40 percent of the global population, with close to 1.5 billion users.

2. Artificial Intelligence

This trend has appeared in all the lineups for a few years now, but everything indicates that this year will be the year it takes off definitively. This is the year we’ll see its democratization, and it has even made its way onto the political agenda. At the beginning of December, the European Commission released a communication on AI directing the member states to define a national strategy addressing this topic by mid-2019.

3. Autonomous Things

With respect to the previous point, robots, drones, and autonomous vehicles are some of the innovations in the category the consulting firm Gartner labels “Autonomous Things”, defined as the use of artificial intelligence to automate functions that were previously performed by people. This trend goes further than mere automation using rigid programming models, because AI is now being implemented to develop advanced behavior, interacting in a more natural way with the environment and its users.

4. Blockchain

Blockchain technology is another topic that frequently appears on these end of year lists. It has now broken free from an exclusive association with the world of cryptocurrencies; its usefulness has been proven in other areas. In 2019 we will witness many blockchain projects get off the ground as they try to address challenges that still face the technology in different fields like banking and insurance. It will also be a decisive year for the roll-out of decentralized organizations that work with intelligent contracts.

5. Augmented Analytics

This trend represents another stride for big data, by combining it with artificial intelligence. Using machine learning (automated learning), it will transform the development, sharing, and consumption of data analysis. It is anticipated that the capabilities of augmented analytics will soon be commonly adopted not only to work with data, but also to implement in-house business applications related to human resources, finance, sales, marketing and customer support – all with the aim of optimizing decisions by using deep data analysis.

6. Digital Twins

A digital twin is a virtual replica of a real-world system or entity. Gartner predicts that there will be more than 20 billion sensors connected to end points by 2020, but the consulting firm goes on to point out that there will also be digital twins for thousands upon thousands of these solutions, with the express purpose of monitoring their behavior. Initially, organizations will implement these replicas, which will continue to be developed over time, improving their ability to compile and visualize the right data, make improvements, and respond effectively to business objectives.
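As a rough illustration of the concept, here is a minimal sketch of a digital twin: a small object that mirrors the last reported state of a physical asset so its behavior can be monitored without touching the device itself. The pump, its fields and the maintenance rule are hypothetical examples, not Gartner's or any vendor's model.

```python
# A toy digital twin that ingests telemetry from a physical pump and lets
# monitoring logic run against the virtual replica instead of the asset.

from dataclasses import dataclass, field

@dataclass
class PumpTwin:
    asset_id: str
    rpm: float = 0.0
    vibration_mm_s: float = 0.0
    history: list = field(default_factory=list)

    def ingest(self, reading: dict) -> None:
        """Update the twin from a telemetry message sent by the real pump."""
        self.rpm = reading.get("rpm", self.rpm)
        self.vibration_mm_s = reading.get("vibration_mm_s", self.vibration_mm_s)
        self.history.append(reading)

    def needs_maintenance(self) -> bool:
        # Hypothetical rule: vibration above 7 mm/s signals wear.
        return self.vibration_mm_s > 7.0

twin = PumpTwin("pump-42")
twin.ingest({"rpm": 1450, "vibration_mm_s": 3.2})
twin.ingest({"rpm": 1452, "vibration_mm_s": 7.8})
print(twin.needs_maintenance())   # True -- flagged from the replica, not the pump
```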

7. Edge Computing

Edge computing is a trend that relates most specifically to the Internet of Things. It consists of placing intermediate points between connected objects and centralized servers. Data can be processed at these intermediate points, facilitating tasks that can be performed closer to where the data is received and thus reducing traffic and latency when responses are sent. With this approach, processing is kept closer to the end point rather than having the data sent to a centralized server in the cloud. Still, instead of creating a totally new architecture, cloud computing and edge computing will be developed as complementary models, with solutions in the cloud administered as a centralized service that runs not only on centralized servers but also on distributed servers and in the edge devices themselves.

8. Immersive Experiences in Smart Spaces

Chatbots integrated into different chat and voice assistance platforms are changing the way people interact with the digital world, just like virtual reality (VR), augmented reality (AR), and mixed reality (MR). The combination of these technologies will dramatically change our perception of the world that surrounds us by creating smart spaces where more immersive, interactive, and automated experiences can occur for a specific group of people or for defined industry cases.

9. Digital Ethics and Privacy

Digital ethics and privacy are topics that are receiving more and more attention from private individuals as well as associations and government organizations. For good reason, people are increasingly concerned about how their personal data is being used by public and private sector organizations. Therefore, we conclude that the winning organizations will be those that proactively address these concerns and are able to earn their customers’ trust.

Thursday, 27 June 2019

SOFTWARE ENGINEERING





Software Engineering


                     Software engineering is an engineering branch associated with the development of software products using well-defined scientific principles, methods and procedures. The outcome of software engineering is an efficient and reliable software product.
Software project management has a wider scope than the software engineering process, as it also involves communication and pre- and post-delivery support.
Software is more than just program code. A program is executable code that serves some computational purpose. Software is considered to be a collection of executable programming code, associated libraries and documentation. Software made for a specific requirement is called a software product.
                        Engineering, on the other hand, is all about developing products using well-defined scientific principles and methods.



Definitions

IEEE defines software engineering as:
(1) The application of a systematic, disciplined, quantifiable approach to the development, operation and maintenance of software; that is, the application of engineering to software.
(2) The study of approaches as in the above statement.
Fritz Bauer, a German computer scientist, defines software engineering as:
Software engineering is the establishment and use of sound engineering principles in order to obtain economically software that is reliable and works efficiently on real machines.

Software Evolution

The process of developing a software product using software engineering principles and methods is referred to as software evolution. This includes the initial development of the software and its maintenance and updates, until the desired software product, which satisfies the expected requirements, is developed.


Evolution starts with the requirement gathering process, after which developers create a prototype of the intended software and show it to the users to get their feedback at an early stage of software product development. The users suggest changes, which lead to several consecutive updates and rounds of maintenance. This process keeps changing the original software until the desired software is accomplished.
Even after the user has the desired software in hand, advancing technology and changing requirements force the software product to change accordingly. Re-creating software from scratch for every change in requirements is not feasible. The only feasible and economical solution is to update the existing software so that it matches the latest requirements.


Need of Software Engineering

The need for software engineering arises because of the high rate of change in user requirements and in the environment in which the software operates.
  • Large software - Just as it is easier to build a wall than a house or a building, as the size of software becomes large, engineering has to step in to give it a scientific process.
  • Scalability - If the software process were not based on scientific and engineering concepts, it would be easier to re-create new software than to scale an existing one.
  • Cost - The hardware industry has shown its skill, and mass manufacturing has lowered the price of computers and electronic hardware. But the cost of software remains high if a proper process is not adopted.
  • Dynamic Nature - The always growing and adapting nature of software hugely depends upon the environment in which the user works. If the nature of software is always changing, new enhancements need to be made to the existing one. This is where software engineering plays a good role.
  • Quality Management - A better software development process provides a better-quality software product.

Characteristics of good software

A software product can be judged by what it offers and how well it can be used. It must satisfy requirements on the following grounds:
  • Operational
  • Transitional
  • Maintenance
Well-engineered and crafted software is expected to have the following characteristics:

Operational

This tells us how well software works in operations. It can be measured on:
  • Budget
  • Usability
  • Efficiency
  • Correctness
  • Functionality
  • Dependability
  • Security
  • Safety

Transitional

This aspect is important when the software is moved from one platform to another:
  • Portability
  • Interoperability
  • Reusability
  • Adaptability

Maintenance

This aspect describes how well the software can maintain itself in an ever-changing environment:
  • Modularity
  • Maintainability
  • Flexibility
  • Scalability
In short, software engineering is a branch of computer science that uses well-defined engineering concepts to produce efficient, durable, scalable, in-budget and on-time software products.




                                                                                                By
                                                                                                K.Deepa


                                                                                                Dept of Computer Applications