Blog

Alpha compositing is the process of combining an image with another image as its background, creating the appearance of full or partial transparency in the result. It is mostly used in 2D graphics, and compositing is also used to combine rendered images with live footage. To combine images effectively, it is essential to keep a matte for each element: the matte stores information about the shape and coverage of that element. The alpha channel is the concept designed to store this information.

In the alpha channel, each pixel carries an additional value between zero and one that describes how opaque it is. The technique is used in many applications: you will find it in leading operating systems such as Android, Mac OS and Plan 9, and it is supported by many graphical user interface (GUI) toolkits and widgets. Although there are other transparency methods, alpha compositing is the most widely used because it is simple to apply and gives consistently good results.
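
The usual way to combine a foreground pixel with a background pixel is the "over" operator. The following is a minimal, hypothetical sketch in JavaScript (the function and pixel format are my own illustration, not from the article), assuming straight (non-premultiplied) colour and alpha values in the range 0 to 1:

```javascript
// Composite a foreground pixel over a background pixel with the "over" operator.
// Each pixel is { r, g, b, a } with every value in the range [0, 1].
function over(fg, bg) {
  // Resulting coverage: the foreground plus whatever background still shows through.
  const outA = fg.a + bg.a * (1 - fg.a);
  if (outA === 0) return { r: 0, g: 0, b: 0, a: 0 }; // fully transparent result

  // Blend each colour channel, weighted by the respective alpha values.
  const blend = (cf, cb) => (cf * fg.a + cb * bg.a * (1 - fg.a)) / outA;
  return { r: blend(fg.r, bg.r), g: blend(fg.g, bg.g), b: blend(fg.b, bg.b), a: outA };
}

// Example: a 50% opaque red pixel over an opaque white background gives a pink pixel.
console.log(over({ r: 1, g: 0, b: 0, a: 0.5 }, { r: 1, g: 1, b: 1, a: 1 }));
// => { r: 1, g: 0.5, b: 0.5, a: 1 }
```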

http://www.collegelib.com/t-alpha-composting-computer-technology-seminar-abstract-report.html

Monday, 09 January 2017 06:26

The purpose of JavaScript


Introduction

Now the Web Standards Curriculum has taken you through the core essential concepts of programming, it is time to take a step back from the details and take a high-level look at what you can actually do with JavaScript — why would you want to take the time to learn such a complicated subject, and use it on your web pages?

This is an interesting time, as the usage of JavaScript has moved away from a fringe knowledge matter to a mainstream web development skill over the last few years. Right now, it is difficult to get a job as a web developer without JavaScript skills.

How people came to like JavaScript

Computers used to be much slower and browsers were bad at interpreting JavaScript. Most developers came from a back-end development world. Back then, JavaScript just seemed like a bad idea.

On the other hand, the cost of hosting files and serving them to visitors was very high. This is where JavaScript came in: JavaScript is executed on users' computers when they access the page (it is client-side), so anything you do in JavaScript adds no processing strain to your server. This made sites much more responsive for the end user and less expensive in terms of server traffic.

Skip forward to today -- modern browsers have well-implemented JavaScript, computers are much faster, and bandwidth is a lot cheaper, so a lot of the negatives are less critical. However, cutting down on server round-trips by doing things in JavaScript still results in more responsive web applications and a better user experience.

The downside of JavaScript

Even with all these improvements, there is still a catch: JavaScript is flaky. Not the language itself, but the environment it runs in. You do not know what computer is on the receiving end of your web page, you do not know how busy that computer is with other things, and you do not know whether some other JavaScript in another browser tab is grinding things to a halt. Until browsers give different tabs and windows their own processing resources (threads), this will remain an issue. Multi-threading is available to a certain degree through the HTML5 Web Workers feature, which now has reasonable browser support.
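
As a hedged sketch of the Web Workers idea (the file names and the work being done are hypothetical), heavy computation can be moved off the main thread so the page stays responsive:

```javascript
// main.js — hand heavy work to a background thread so the page does not freeze.
if (window.Worker) {
  const worker = new Worker('sum-worker.js'); // hypothetical worker script
  worker.onmessage = (event) => console.log('Computed off the main thread:', event.data);
  worker.postMessage(10000000); // how many numbers to add up
}

// sum-worker.js — runs in its own thread; this loop cannot block the UI.
onmessage = (event) => {
  let total = 0;
  for (let i = 0; i < event.data; i++) total += i;
  postMessage(total);
};
```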

In addition, JavaScript is frequently turned off in browsers because of security concerns, or because JavaScript is often used to annoy people rather than to improve their experience. For example, a lot of sites still try to pop-up new windows against your wishes, or cover the content with advertising until you click a link to get rid of it.

What JavaScript can do for you

Let’s take a step back and count the merits of JavaScript:

  • JavaScript is very easy to implement. All you need to do is put your code in the HTML document and tell the browser that it is JavaScript.
  • JavaScript works on web users’ computers — even when they are offline!
  • JavaScript allows you to create highly responsive interfaces that improve the user experience and provide dynamic functionality, without having to wait for the server to react and show another page.
  • JavaScript can load content into the document if and when the user needs it, without reloading the entire page — this is commonly referred to as Ajax (a minimal sketch follows this list).
  • JavaScript can test for what is possible in the user's browser and react accordingly — this is one of the principles of unobtrusive JavaScript, sometimes called defensive scripting.
  • JavaScript can help fix browser problems or patch holes in browser support — for example fixing CSS layout issues in certain browsers.
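
To illustrate the Ajax point above, here is a hedged, minimal sketch using the modern fetch API; the URL, element ids and fallback page are hypothetical, and a real implementation would depend on the server providing that fragment:

```javascript
// Load extra content on demand and inject it into the page without a full reload.
async function loadMore() {
  try {
    const response = await fetch('/fragments/more-content.html'); // hypothetical URL
    if (!response.ok) throw new Error('HTTP ' + response.status);
    document.getElementById('extra').innerHTML = await response.text();
  } catch (err) {
    // Fall back gracefully: take the user to a full page instead.
    window.location.href = '/more-content.html';
  }
}

document.getElementById('load-more').addEventListener('click', loadMore);
```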

That is a lot for a language that until recently was laughed at by programmers favouring “higher programming languages”. One part of the renaissance of JavaScript is that we are building more and more complex web applications these days, and high interactivity either requires Flash (or other plugins) or scripting. JavaScript is arguably the best way to go, as it is a web standard, it is supported natively across browsers (more or less — some things differ across browsers, and these differences are discussed in appropriate places in the articles that follow this one), and it is compatible with other open web standards.

Common uses of JavaScript

The usage of JavaScript has changed over the years we have been using it. At first, JavaScript interaction with the site was mostly limited to interacting with forms, giving feedback to the user, and detecting when they do certain things. We used alert() to notify the user of something, confirm() to ask if something is OK to do and either prompt() or a form field to get user input.

This led mostly to validation scripts that stopped the user from sending a form to the server when there was a mistake, and to simple converters and calculators. In addition, we managed to build highly useless things like prompts asking the user for their name just to print it out immediately afterwards.
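
A hedged sketch of that old style of scripting (the form and field ids are hypothetical): intercept the submit event and complain with alert() before anything reaches the server.

```javascript
// Old-school validation: block submission and nag the user with alert().
document.getElementById('signup-form').addEventListener('submit', (event) => {
  const email = document.getElementById('email').value.trim();
  if (email === '' || !email.includes('@')) {
    alert('Please enter a valid email address.'); // the classic way of giving feedback
    event.preventDefault();                       // stop the form reaching the server
  }
});
```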

Another thing we used was document.write() to add content to the document. We also worked with pop-up windows and frames and lost many a nerve and pulled out hair trying to make them talk to each other. Thinking about most of these technologies should make any developer rock forward and backward and curl up into a fetal position stammering “make them go away”, so let's not dwell on these things — there are better ways to use JavaScript!

Enter DOM scripting

When browsers started supporting and implementing the Document Object Model (DOM), which allows us to have much richer interaction with web pages, JavaScript started to get more interesting.

The DOM is an object representation of the document. For example, the previous paragraph (check out its source using view source) in DOM-speak is an element node with a nodeName of p. It contains three child nodes:

• a text node containing "When browsers started supporting and implementing the " as its nodeValue;

• an element node with a nodeName of a;

• another text node with a nodeValue of ", which allows us to have much richer interaction with web pages, JavaScript started to get more interesting.".

The a child node also has an attribute node called href with a value of "http://www.w3.org/DOM/" and a child node that is a text node with a nodeValue of "Document Object Model (DOM)".

In human words, you can say that the DOM describes the types, the values, and the hierarchy of everything in the document — you do not need to know anything more for now.
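
As a hedged sketch (the text and structure simply mirror the example above), the same paragraph could be built and inspected with standard DOM methods:

```javascript
// Build a paragraph like the one described above: text, a link, then more text.
const p = document.createElement('p');
p.appendChild(document.createTextNode('When browsers started supporting and implementing the '));

const link = document.createElement('a');
link.setAttribute('href', 'http://www.w3.org/DOM/');
link.appendChild(document.createTextNode('Document Object Model (DOM)'));
p.appendChild(link);

p.appendChild(document.createTextNode(', JavaScript started to get more interesting.'));
document.body.appendChild(p);

// Inspect the structure: the element's nodeName and its three child nodes.
console.log(p.nodeName);                               // "P"
console.log([...p.childNodes].map((n) => n.nodeName)); // ["#text", "A", "#text"]
```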

Using the DOM you can:

  • Access any element in the document and manipulate its look, content, and attributes.
  • Create new elements and content and apply them to the document when and if they are needed.

This means that we do not have to rely on windows, frames, forms, and ugly alerts any longer, and can give feedback to the user in the document in a nicely styled manner.

Together with event handling, this is a very powerful arsenal to create interactive and beautiful interfaces.

Event handling means that our code reacts to things that happen in the browser. This could be things that happen automatically — like the page finishing loading — but most of the time we react to what the user did to the browser.

Users might resize the window, scroll the page, press certain keys, or click on links/buttons/elements using the mouse. With event handling, we can wait for these things to happen and tell the web page to respond to these actions as we wish. Whereas in the past, clicking any link would take the site visitor to another document, we can now hijack this functionality and do something else like showing and hiding a panel or taking the information in the link and using it to connect to a web service.
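
A hedged, minimal sketch of that link "hijacking" (the element ids are hypothetical): listen for the click, stop the normal navigation, and toggle a panel in the page instead.

```javascript
// Instead of navigating away, show or hide a details panel when the link is clicked.
document.getElementById('details-link').addEventListener('click', (event) => {
  event.preventDefault(); // stop the browser from following the href
  const panel = document.getElementById('details-panel');
  panel.hidden = !panel.hidden; // toggle the panel in place
});
```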

Other modern uses of JavaScript

And this is basically what we are doing these days with JavaScript. We enhance the old, tried and true web interface — clicking links, entering information and sending off forms, etc. — to be more responsive to the end user.

For example:

  • A sign-up form can check if your user name is available when you enter it, preventing you from having to endure a frustrating reload of the page.
  • A search box can give you suggested results while you type, based on what has been entered so far (for example “bi” could bring up suggestions to choose from that contain this string, such as “bird”, “big”, and “bicycle”). This usage pattern is called autocomplete (a minimal sketch follows this list).
  • Information that changes constantly can be loaded periodically without the need for user interaction, for example sports match results or stock market tickers.
  • Information that is a nice-to-have and runs the risk of being redundant to some users can be loaded when and if the user chooses to access it. For example the navigation menu of a site could be 6 links but display links to deeper pages on-demand when the user activates a menu item.
  • JavaScript can fix layout issues. Using JavaScript, you can find the position and area of any element on the page, and the dimensions of the browser window. Using this information you can prevent overlapping elements and other such issues. Say for example you have a menu with several levels; by checking that there is space for the sub-menu to appear before showing it, you can prevent scroll-bars or overlapping menu items.
  • JavaScript can enhance the interfaces HTML gives us. While it is nice to have a text input box you might want to have a combo box allowing you to choose from a list of preset values or enter your own. Using JavaScript, you can enhance a normal input box to do that.
  • You can use JavaScript to animate elements on a page — for example to show and hide information, or highlight specific sections of a page — this can make for a more usable, richer user experience.
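
As promised above, here is a hedged sketch of the autocomplete pattern; the element ids are hypothetical, and a hard-coded word list stands in for whatever real data source a site would query:

```javascript
// Show suggestions that contain whatever the user has typed so far.
const WORDS = ['bird', 'big', 'bicycle', 'banana', 'boat'];

document.getElementById('search-box').addEventListener('input', (event) => {
  const typed = event.target.value.toLowerCase();
  const matches = typed ? WORDS.filter((word) => word.includes(typed)) : [];

  // Render the matches as a simple list under the search box.
  document.getElementById('suggestions').innerHTML =
    matches.map((word) => '<li>' + word + '</li>').join('');
});
```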

 Using JavaScript sensibly and responsibly

There is not much you cannot do with JavaScript — especially when you mix it with other technologies like Canvas or SVG. However, with great power comes great responsibility, and you should always remember the following when using JavaScript:

  • JavaScript might not be available — this is easy to test for, so not really a problem. However, things that depend on JavaScript should be created with this in mind, and you should be careful that your site does not break (i.e. that essential functionality remains available) if JavaScript is not available (a short sketch of this defensive checking follows this list).
  • If the use of JavaScript does not aid the user in reaching a goal more quickly and efficiently you are probably using it wrong.
  • Using JavaScript, we often break conventions that people have got used to over years of using the web (for example, clicking links to go to other pages, or a little basket icon meaning “shopping cart”). Whilst these usage patterns might be outdated and inefficient, changing them still means making users change their ways — and this makes humans feel uneasy. We like being in control and once we understand something, it is hard for us to deal with change. Your JavaScript solutions should feel naturally better than the previous interaction, but not so different that the user cannot relate to it via their previous experience. If you manage to get a site visitor saying “ah ha — this means I do not have to wait” or “Cool — now I do not have to take this extra annoying step” — you have got yourself a great use for JavaScript.
  • JavaScript should never be a security measure. If you need to prevent users from accessing data or you are likely to handle sensitive data, then do not rely on JavaScript. Any JavaScript protection can easily be reverse-engineered and overcome, as all the code is available to read on the client machine. Also, users can just turn JavaScript off in their browsers.
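
A hedged sketch of that defensive attitude (the selector and the enhancement are hypothetical): check that the browser supports what you need before relying on it, and leave the plain HTML behaviour as the fallback.

```javascript
// Only enhance the page if the browser supports what we need; otherwise do nothing
// and let the ordinary HTML keep working.
function enhanceGallery() {
  const gallery = document.querySelector('.gallery');          // hypothetical element
  if (!gallery || !('IntersectionObserver' in window)) return; // feature-detect, then bail out quietly

  // Enhancement: lazy-load images only when they scroll into view.
  const observer = new IntersectionObserver((entries) => {
    entries.forEach((entry) => {
      if (entry.isIntersecting) {
        entry.target.src = entry.target.dataset.src;
        observer.unobserve(entry.target);
      }
    });
  });
  gallery.querySelectorAll('img[data-src]').forEach((img) => observer.observe(img));
}

document.addEventListener('DOMContentLoaded', enhanceGallery);
```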

Conclusion

JavaScript is a wonderful technology to use on the web. It is not that hard to learn and it is very versatile. It plays nicely with other web technologies — such as HTML and CSS — and can even interact with plugins such as Flash. JavaScript allows us to build highly responsive user interfaces, prevent frustrating page reloads, and even fix support issues for CSS. Using the right browser add-ons (such as Google Gears or Yahoo Browser Plus) you can even use JavaScript to make online systems available offline and sync automatically once the computer goes online.

JavaScript is also not restricted to browsers. The speed and small memory footprint of JavaScript in comparison to other languages brings up more and more uses for it — from automating repetitive tasks in programs like Illustrator, up to using it as a server-side language with a standalone parser. The future is wide open.

Source:- https://docs.webplatform.org/wiki/concepts/programming/the_purpose_of_javascript

PHP development techniques to improve the quality of your programming.

Selecting the right caching technique

There are many caching techniques, each suited to different situations. The simplest and most widely accepted trick is to use an opcode cache, which eliminates much of the redundant work of recompiling scripts on every request. Memcached, which stores data on a central server, works well for applications that are spread across several servers. Other simple techniques involve serializing data and storing it in temporary files, which also works well for optimising code. Apply the caching technique that best suits the demands of the project.
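
The serialize-to-a-temporary-file idea is language-agnostic. As a hedged illustration (shown in JavaScript/Node rather than PHP, with a hypothetical cache path and time-to-live), the pattern is: serve the cached copy while it is fresh, otherwise rebuild it and write it back out:

```javascript
const fs = require('fs');

// Return cached data if the temp file is still fresh; otherwise rebuild and re-cache it.
function cachedFetch(buildData, cacheFile = '/tmp/report-cache.json', maxAgeMs = 60000) {
  try {
    const stats = fs.statSync(cacheFile);
    if (Date.now() - stats.mtimeMs < maxAgeMs) {
      return JSON.parse(fs.readFileSync(cacheFile, 'utf8')); // serve the serialized copy
    }
  } catch (err) {
    // No cache file yet; fall through and build one.
  }
  const data = buildData();                          // the expensive work we want to avoid repeating
  fs.writeFileSync(cacheFile, JSON.stringify(data)); // serialize for next time
  return data;
}

// Usage: the expensive callback only runs when the cache has expired.
console.log(cachedFetch(() => ({ builtAt: new Date().toISOString() })));
```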

Utilize the unit testing process

To maintain a codebase and avoid code rot, there is no better idea than unit testing. Follow a stepwise process: write the tests, write the code, refactor, and then run the final tests at the end. This lets you verify whether the new code actually works better or not. FirePHP and Xdebug are also good tools for this purpose, and cleaning up logs and tracking error reports are good habits that keep code working well.

Connect with other programmers

Although this is a common and conventional idea, it works remarkably well. Going through programming blogs, social media networks, YouTube and websites that cover PHP development will help you improve your skills. I never miss a relevant conference in my community, and I participate in bug reports and code reviews. Apart from this, I always try to contribute my work to open source projects to enhance my skills; in my experience, this is the way I learn the most.

A typical programmer may think they know a lot about PHP development (I did too), but that is rarely the case. The rapidly changing technology landscape and the constantly evolving techniques in PHP programming mean there is always a great deal we do not yet know. There is always room for improvement, even for expert programmers, and by tracking news about the latest releases, coding practices and insider knowledge, you can be confident of staying on top of the field. Your first objective should be to write clear, well-optimised code.

Source:-http://www.7eyetechnologies.com/blog/three-latest-php-techniques-to-improve-class-programming/

Wednesday, 04 January 2017 04:46

Sixth Sense Technology


Abstract

Sixth Sense Technology integrates digital information into the physical world and its objects, making the entire world your computer. It can turn any surface into a touch-screen for computing, controlled by simple hand gestures. It is not a technology aimed at changing human habits, but at making computers and other machines adapt to human needs. It also supports multi-user and multi-touch interaction.

The Sixth Sense device is a mini-projector coupled with a camera and a cell phone, which acts as the computer and your connection to the cloud, i.e. all the information stored on the web. The current prototype costs around $350. The Sixth Sense prototype has been used to implement several applications that have shown the usefulness, viability and flexibility of the system.

Introduction to Sixth Sense Technology

'Sixth Sense' is a wearable gestural interface that augments the physical world around us with digital information and lets us use natural hand gestures to interact with that information. The hardware components are coupled in a pendant-like mobile wearable device. The Sixth Sense prototype comprises a pocket projector, a mirror, colored markers and a camera. The camera, mirror and projector are connected wirelessly to a Bluetooth smartphone that can easily fit into the user's pocket. Software then processes the data collected by the capturing device and produces the analysis; the software used in the Sixth Sense device is open source.

Gesture Recognition

Gesture recognition is a technology aimed at interpreting human gestures with the help of mathematical algorithms. It focuses mainly on recognising emotion from the face and on hand gesture recognition. Gesture recognition enables humans to interact with computers in a more direct way, without any external interfacing devices, and it can provide a much better alternative to text-based and graphical user interfaces, which require a keyboard or mouse to interact with the computer.

Interfaces that depend solely on gestures require precise hand-pose tracking. Early gesture recognition systems used special gloves that provided information about hand position, orientation and the flex of the fingers; in the SixthSense device, colored bands are used for this purpose. Once the hand pose has been captured, gestures can be recognized using different techniques. Neural network approaches and statistical templates are the techniques most commonly used for recognition, and they typically achieve accuracies of more than 95%. Time-dependent neural networks can also be used for real-time recognition of gestures.

Applications

Sixth Sense technology finds many applications in the modern world. Sixth Sense devices bridge the gap between the digital world and the real world, in the process allowing users to interact with information without the help of any machine interfaces. Prototypes of the device have demonstrated the viability, usefulness and flexibility of this new technology. According to its developers, the extent of use of this new device is limited only by the imagination of human beings.

Sixth Sense recognizes the objects around us and displays information relating to those objects in real time. The technology allows the user to interact with that information through hand gestures, which is a far more efficient way of working than text- and graphics-based user interfaces, and it has the potential to become a transparent user interface for accessing the information around us.

Source:-http://www.seminarsonly.com/computer%20science/Sixth-Sense-Technology.php

Thursday, 29 December 2016 05:02

History of Computer Memory


Drum memory was an early form of computer memory that actually used a drum as a working part, with data loaded onto the drum. The drum was a metal cylinder coated with recordable ferromagnetic material, and it had a row of read-write heads that wrote and then read the recorded data.

Magnetic core memory (ferrite-core memory) is another early form of computer memory. Magnetic ceramic rings called cores stored information using the polarity of a magnetic field.

Semiconductor memory is the computer memory we are all familiar with: memory on an integrated circuit or chip. Referred to as random-access memory, or RAM, it allows data to be accessed randomly, not just in the sequence in which it was recorded.

Dynamic random access memory (DRAM) is the most common kind of random access memory (RAM) for personal computers. The data a DRAM chip holds has to be refreshed periodically; static random access memory, or SRAM, does not need to be refreshed.

Timeline of Computer Memory

1834

Charles Babbage begins building his "Analytical Engine", a precursor to the computer. It uses read-only memory in the form of punch cards.

1932

Gustav Tauschek invents drum memory in Austria.

1936

Konrad Zuse applies for a patent for his mechanical memory to be used in his computer. This computer memory is based on sliding metal parts.

1939

Helmut Schreyer invents a prototype memory using neon lamps.

1942

The Atanasoff-Berry Computer has 60 50-bit words of memory in the form of capacitors mounted on two revolving drums. For secondary memory it uses punch cards.

1947

Frederick Viehe of Los Angeles applies for a patent for an invention that uses magnetic core memory. Magnetic drum memory is independently invented by several people:

  • An Wang invented the magnetic pulse controlling device, the principle upon which magnetic core memory is based.
  • Kenneth Olsen invented vital computer components and is best known for his "Magnetic Core Memory" patent (No. 3,161,861) and for co-founding Digital Equipment Corporation.
  • Jay Forrester was a pioneer in early digital computer development and invented random-access, coincident-current magnetic storage.

1949

Jay Forrester conceives the idea of magnetic core memory as it is to become commonly used, with a grid of wires used to address the cores. The first practical form manifests in 1952-53 and renders obsolete previous types of computer memory.

1950

Ferranti Ltd. completes the first commercial computer, with 256 40-bit words of main memory and 16K words of drum memory. Only eight were sold.

1951

Jay Forrester files a patent for matrix core memory.

1952

The EDVAC computer is completed with 1024 44-bit words of ultrasonic memory. A core memory module is added to the ENIAC computer.

1955

An Wang is issued U.S. patent #2,708,722, with 34 claims, for magnetic memory core.

1966

Hewlett-Packard releases its HP2116A real-time computer with 8K of memory. The newly formed Intel starts selling a semiconductor chip with 2,000 bits of memory.

1968

USPTO grants patent 3,387,286 to IBM's Robert Dennard for a one-transistor DRAM cell. DRAM stands for Dynamic RAM (Random Access Memory) or Dynamic Random Access Memory. DRAM will become the standard memory chip for personal computers replacing magnetic core memory.

1969

Intel begins as a chip designer and produces a 1 KB RAM chip, the largest memory chip to date. Intel soon switches to being a notable designer of computer microprocessors.

1970

Intel releases the 1103 chip, the first generally available DRAM memory chip.

1971

Intel releases the 1101 chip, a 256-bit programmable memory, and the 1701 chip, a 256-byte erasable read-only memory (EROM).

1974

Intel receives a U.S. patent for a "memory system for a multichip digital computer".

1975

The Altair personal consumer computer is released; it uses Intel's 8-bit 8080 processor and includes 1 KB of memory. Later in the same year, Bob Marsh manufactures Processor Technology's first 4 kB memory boards for the Altair.

1984

Apple Computer releases the Macintosh personal computer, the first computer to come with 128 KB of memory. The one-megabyte memory chip is also developed this year.

Source:-http://inventors.about.com/od/rstartinventions/a/Ram.htm

Friday, 23 December 2016 02:40

A brief history of web design for designers


Can you imagine what the very first website looked like? It was nothing like what we have today. No images. No CSS. No parallax design.

Though there's much more we can do with web design today, it's fun to take a look back at where we came from. In the infographic below, AmeriCommerce takes us through the exciting history of web design from 1990 to present. Take a look at the design trends, browsers, and monitor resolutions that were prevalent at different times over the last 25 years.

The dark ages of web design (1989)

The very beginning of web design was pretty dark, as screens were literally black and only a few monochrome pixels lived there. Design was done with symbols and tabulation (the Tab key). So let's fast-forward to the time when the graphical user interface was the main way of surfing the web – the Wild West era of tables.

Tables – The beginning (1995)

The birth of browsers that could display images was the first step into web design as we know it. The closest option available for structuring information was the table concept that already existed in HTML. So putting tables within tables and figuring out clever ways to mix static cells with fluid cells was the thing, popularised by David Siegel's book Creating Killer Web Sites. Though it never felt quite right, since the main purpose of a table is to structure numbers, it was the common method of designing the web for quite some time. The other problem was the difficulty of maintaining these fragile structures. This is also the time when the term "slicing designs" became popular: designers would make a fancy layout, and it was up to developers to break it into small pieces and figure out the best way to make that design work. On the other hand, tables had some awesome features, like the ability to align things vertically and to be defined in pixels or percentages. The main benefit was that they were the closest thing to a grid we could get back then. It was also the time when many developers decided they did not like front-end coding.

JavaScript comes to the rescue (1995)

JavaScript was the answer to the limitations of HTML. Need a popup window, or want to dynamically modify the order of something? The answer was JavaScript. However, the main problem is that JavaScript is layered on top of the fabric that makes the web work and has to be loaded separately. Very often it is used as an easy patch by a lazy developer, yet it is very powerful if used wisely. Nowadays we prefer to avoid JavaScript if the same feature can be delivered using CSS, yet JavaScript itself stays strong both on the front end (jQuery) and on the back end (Node.js).

The golden era of freedom – Flash (1996)

To break the limitations of existing web design, a technology was born that promised a freedom never seen before. The designer could design any shapes, layouts, animations, interactions, use any font and all this in one tool – Flash. The end-result is packed as one file and then sent to the browser to be displayed. That is, as long as a user had the latest flash plugin and some free time to wait until it loads, it worked like magic. This was the golden era for splash pages, intro animations, all kinds of interactive effects. Unfortunately, it wasn’t too open or search-friendly and certainly consumed a lot of processing power. When Apple decided to abandon it on their first iPhone (2007), Flash started to decay. At least for web design.

CSS (1998)

Around the same time as Flash, a better approach to structuring design from a technical standpoint was born – Cascading Style Sheets (CSS). The basic concept here is to separate content from the presentation. So the look and formatting are defined in CSS, but the content in HTML. The first versions of CSS were far from flexible, but the biggest problem was the adoption rate by browsers. It took a few years before browsers started to fully support it and often it was quite buggy. This is also the time when one browser had the newest feature, while another was lacking it, which is a nightmare for a developer. To be clear, CSS isn’t a coding language, it is rather a declarative language. Should web designers learn how to code? Maybe. Should they understand how CSS works? Absolutely!

Mobile uprising – Grids and frameworks (2007)

Browsing the web on mobile phones was a whole challenge in itself. Besides all the different layouts for devices, it introduced content-parity problems – should the design be the same on the tiny screen, or should it be stripped down? And where do you put all the nice, blinking ads on that tiny screen? Speed was also an issue, as loading a lot of content burns your internet money pretty fast. The first step towards improvement was the idea of column grids. After a few iterations, the 960 grid system won, and the 12-column division became something designers used every day. The next step was standardising commonly used elements like forms, navigation and buttons, and packing them in an easy, reusable way – basically, making a library of visual elements that contains all the code in it. The winners here are Bootstrap and Foundation, which is also related to the fact that the line between a website and an app is fading. The downside is that designs often look the same, and designers still can't access them without knowing how the code works.

Responsive web design (2010)

A brilliant guy named Ethan Marcotte decided to challenge the existing approach by proposing to use the same content but different layouts for the design, and coined the term responsive web design. Technically we still use HTML and CSS, so it is more a conceptual advancement than a technical one. Yet there are a lot of misconceptions here. For a designer, responsive means mocking up multiple layouts. For the client, it means it works on the phone. For a developer, it is about how images are served, download speeds, semantics, mobile- or desktop-first and more. The main benefit is content parity, meaning that it is the same website that works everywhere. Hopefully we can agree on that, at least.

The times of the flat (2010)

Designing more layouts takes more time, so luckily we decided to streamline the process by ditching fancy shadow effects and getting back to the roots of design by prioritising the content. Fine photography, typography, sharp illustrations and thoughtful layouts are how we design now. Simplifying visual elements, or so-called flat design, is also part of the process. The main benefit is that much more thought is put into the copy, into the hierarchy of the message and into the content in general. Glossy buttons are replaced by icons, which lets us use vector images and icon fonts, and web fonts deliver beautiful typography. The funny thing is, the web was close to this from the very beginning. But well, that's what the young years are for.

The bright future (2014)

The holy grail of web design has been to actually make it visual and bring it into the browser. Imagine that designers simply move things around the screen and a clean code comes out! And I don't mean changing the order of things, but having full flexibility and control! Imagine that developers don't have to worry about browser compatibility and can focus on actual problem solving!

Technically there are a few new concepts that support the move in that direction. New CSS units like vh and vw (viewport height and width) allow much greater flexibility in positioning elements. They also solve a problem that has puzzled so many designers – why centering something vertically in CSS has been such a pain. Flexbox is another cool concept that is part of CSS: it lets you create layouts and modify them with a single property instead of writing a lot of code. And finally, web components are an even bigger step. They are sets of elements bundled together, e.g. a gallery or a signup form, which introduces an easier workflow in which elements become building blocks that can be reused and updated separately.

Source:-http://blog.froont.com/brief-history-of-web-design-for-designers/

A database management system is important because it manages data efficiently and allows users to perform multiple tasks with ease. A database management system stores, organizes and manages a large amount of information within a single software application. Use of this system increases efficiency of business operations and reduces overall costs.

Database management systems are important to businesses and organizations because they provide a highly efficient method for handling multiple types of data. Some of the data that are easily managed with this type of system include: employee records, student information, payroll, accounting, project management, inventory and library books. These systems are built to be extremely versatile.

Without database management, tasks have to be done manually and take more time. Data can be categorized and structured to suit the needs of the company or organization. Data is entered into the system and accessed on a routine basis by assigned users. Each user may have an assigned password to gain access to their part of the system. Multiple users can use the system at the same time in different ways.

For example, a company's human resources department uses the database to manage employee records, distribute legal information to employees and create updated hiring reports. A manufacturer might use this type of system to keep track of production, inventory and distribution. In both scenarios, the database management system operates to create a smoother and more organized working environment.

A simple database has a single table with rows for the data and columns that define the data elements. For an address book, the table columns define data elements such as name, address, city, state and phone number, while a table row, or record, contains data for each person in the book. The query language provides a way to find specific types of data in each record and return results that match the criteria. These results display in a form that uses the defined data elements but only shows records that meet the criteria. These three components make up almost every type of database.
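
As a hedged, purely illustrative sketch (not a real DBMS, and the field names are hypothetical), the three components just described, the table of records, the query, and the matching results, can be mimicked in a few lines of JavaScript:

```javascript
// A tiny "address book" table: each object is a row (record), each property a column.
const addressBook = [
  { name: 'Ada Lovelace', city: 'London',   phone: '555-0101' },
  { name: 'Alan Turing',  city: 'London',   phone: '555-0102' },
  { name: 'Grace Hopper', city: 'New York', phone: '555-0103' },
];

// A "query": return only the records whose columns match the given criteria.
const query = (table, criteria) =>
  table.filter((row) =>
    Object.entries(criteria).every(([column, value]) => row[column] === value));

// The result set: only records for people in London are returned.
console.log(query(addressBook, { city: 'London' }));
```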

Relational databases use multiple tables and define relationships between them using a schema in addition to data elements. Records and data elements from each table merge, based on the query, and display in the form. Routinely used queries often become reports. A report uses the same query but reports on changes in data over time.

There are five major components in a database environment: data, hardware, software, people and procedures. The data is a collection of facts, typically related. The hardware is the physical devices in the database environment. Operating systems, database management systems and applications make up the software. Examples of people in the database environment are the system administrator, programmers and end users. Procedures are the instructions and rules for the database.

Source:-https://www.linkedin.com/pulse/what-importance-database-management-system-scott-aveda

With the staggering growth of mobile devices like smart phones and tablets, and mobile device usage, via games, apps, social media, and websites, it is now essential that your business website is mobile friendly, so that your clients and prospects can easily browse and find information, regardless of what type of device (phone, tablet, desktop, etc.) they are using. 

1. mobile usage is on the rise

Currently, more than 58% of American adults own a smartphone, and almost 60% of all website traffic comes from mobile devices. In fact, there are currently more mobile devices on earth than there are people. And every month mobile usage continues to grow, so every month more and more prospects and customers will view your website from a mobile device. If their experience viewing and interacting with your site is poor, they'll likely have a lower opinion of your brand, and they'll also be more likely to visit a competitor's site.

2. shopping on mobile devices is steadily growing

Online shopping is easier than hopping in the car and driving to the store and it is even easier if you can do it in your favorite chair, while watching TV. 80% of consumers regularly use their smartphones to shop online. And 70% of shoppers now use mobile phones while in stores during the holidays. If your products and services aren't easy to view from a phone, you're missing out on an opportunity.

3. social media increases mobile visitors

Over 55% of social media consumption now happens on mobile devices, so sharing links from social media sites like Facebook, YouTube, Twitter, or Google Plus to your website will mean even more traffic and viewing of your website from mobile devices. So if you have a social marketing strategy and want to leverage social sharing of content, get responsive.

4. responsive sites improve seo rankings

Responsive development is Google's recommended approach for mobile web design. Per Google, responsive websites will perform better in search rankings because they provide a better user experience than sites that are not mobile friendly. Additionally, Google likes that responsive sites use single URLs rather than different URLs for separate mobile versions of websites.

Furthermore, mobile phones now have a separate Google search algorithm as well, so just because your site ranks high in a desktop search doesn't mean it will continue to rank well for individuals who perform the same search on their phone. This issue becomes even more critical when you consider that mobile searches will overtake desktop searches in 2015! If you think search engine optimization (SEO) is important, then your site had better be responsive.

5. responsive designs adapt to multiple devices sizes

Want your web design to look great, no matter the device or screen size? Then responsive web design is the way to go. But don't just think about today with smartphones and tablets. Think about tomorrow with smart watches, Google Glass, and whatever new device pops up for internet viewing. Responsive web design and development will work for them too.

6. one site is easier to manage and increases r.o.i.

There are currently many organizations that actually have two websites: 1) their main site and 2) a second mobile version of their site. This was a fairly common practice before responsive development became the preferred method, but it meant multiple versions to manage and update - inefficiency! With a responsive site, your site will adapt to each device, providing the relevant layout and content that best meets the users' needs. It also means that your business will only have one site to manage, so you'll only have to update content one time, regardless of how different people consume your content. That also means lower web content management costs and higher R.O.I.

7. responsive sites provide a better user experience

There are plenty of business reasons to implement a responsive website, but they all connect back to the goal of providing a better user experience for your audience. A responsive site means no more pinching and zooming, and no more side scrolling, to see an entire site that doesn't fit on a mobile screen. And a better user experience reduces bounce rates, boosts website conversions and improves brand perception.

8. a better bathroom experience

Finally, the most disturbing stat about the growth of mobile usage: 75% of Americans bring their phones to the bathroom! That's certainly gross and it may also be an indicator of the downfall of mankind, but it is true. And if people are going to browse from the restroom, you can at least provide them with a positive user experience. Go responsive!

Source: http://www.marketpath.com/digital-marketing-insights/8-reasons-to-have-a-responsive-web-design-infographic

Thursday, 08 December 2016 04:57

Open Source Strategies for Software Developers


Introduction

Analysts tout 2005 as the year of open source. Its use has gained increased media attention, and many software consulting companies are starting to support various open source projects. As developers, we know there is a definite benefit gained with using open source software in some aspect of our project, whether it is used as a tool (e.g. Eclipse IDE, gcc, etc.) or integrated deeply into the project (e.g. Tomcat, Spring Framework, Apache HTTP, etc.).

In fact, most of us use this type of software without much thought to the issues surrounding its use. We hear about a great new open source software project, download the latest release, deploy the "Pet store" example, develop the "Hello World", and on we go using this software in every other project we work on. Open source software gives the developer more options, helps to increase the knowledge base among developers, and improves the overall programming prowess of the community.

With all the benefits associated with open source, developers might wonder what the issues are surrounding the use of open source software. What are the risks? Is this software really free? What if I need support? These, and several others discussed below, are all valid questions a developer should ask when considering whether an open source product should be incorporated into the developer's project.

Open Source: A Definition

Let's take a step back and define what open source software means. Some people use the term loosely to describe any software that is freely distributed with modifiable source code. An organization called the Open Source Initiative (OSI) maintains a more formal and strict definition on its web site. The OSI uses ten criteria in its definition, addressing issues such as the ability to freely distribute software, accessibility to source code, the ability to derive works, and more.

The OSI also reviews and approves open source licenses to determine whether they meet the organization's standards for open source licensing. At the time of this writing, there are 58 different licenses approved by the OSI, including the GNU General Public License (GPL), the Apache License 2.0, and the Mozilla Public License 1.1 (MPL). Not all "open source" licenses, however, meet or exceed the OSI's criteria.

Communicating with Management

Typically, the developer finds management to be of two different mindsets when it comes to open source. They are either supportive because they believe it is an inexpensive solution, or they are not supportive because of the perceived risks associated with its use (this may stem from not understanding the paradigm, or not having the ability to purchase support).

In the first case, the developer may also find management to be overly enthusiastic in the use of open source software. This usually stems from the initial cost-savings aspect of using open source software. Often times, management hears how open source will ultimately save a project a lot of money. This may be the main reason why management will opt for the use of open source software. An example of this may be the migration of a software development project that has been implemented using a non-open source database such as Oracle 10g to an open source database (e.g. MySQL, PostgreSQL, or Firebird).

However, management may not understand the risks associated with migrating from one database to another. They may also be unaware of the process, time, and resources necessary to complete this project successfully. The developer may need to explain to management the risks and any potential roadblocks that could be encountered during the migration.

In the second case, the notion of open source software may seem enigmatic to management. The fact that source code is available may be thought of as a security risk for the company. Management may treat the lack of up-front cost for some open source projects as adding to overall risk. They may even think the product is not serious enough to be used in a mainstream production environment or software product.

In this scenario, the developer must take on the responsibility to educate management as to the capabilities of the open source product, and how the use of it can both help the "bottom-line" as well as improve the quality of the project.

As discussed in the next section, management should also know about the licensing ramifications associated with distributing any type of software. Often times, the developer is the only one who fully understands the license. In fact, many times the license is only available in the compressed source download. (Interestingly enough, it is a requirement to read the license before proceeding with uncompressing the file for some open source projects.)

Again the responsibility falls to the developer to properly educate management in any licensing issues that might occur. If management does not appear to understand the notion of open source, the developer may need to provide any necessary information as it pertains to open source prior to the explanation of the specific license. At times, the developer may be the sole educator in terms of open source understanding and licensure in the company.

Licensing

I know what you are saying, "Who really reads those, anyways?" Well, I do (the lawyers told me to say that). Remember what your parents said? Eat your veggies, take your vitamins, button up before you go out in the cold, and make sure to read that license before using any open source software (I would make sure to read licenses of non-open source software too, even though those are generally long and frightening).

Let's say a developer is creating an application that will be sold commercially, and let's say the developer integrates a component released under the Apache license. Using examples from a few different open source licenses, typical questions include:

• What are some of the specifics we must take into account in distributing the Mozilla software product with my application?

• Can I use parts of an Apache product in my application but not the whole thing?

• Since I am using a GNU product (which is released under the GNU GPL license), do I have to release my software under the GPL License?

Let's take the first situation. Assume the product we are selling requires Mozilla Firefox to be included in the distribution. In addition to this, let's assume the Firefox source code was modified. The first thing to determine is which Mozilla license Firefox is distributed under. According to Mozilla.org, its products are distributed under the MPL or the Mozilla End-User License Agreement (EULA). Looking at the license file on my Firefox installation, I noticed the MPL 1.1 is what Firefox is distributed with for Windows.

Next, it is necessary to determine what was done to the Firefox browser. Since the Firefox source code was modified, there are procedures that must be followed to properly re-distribute the software. For instance, it is necessary to make source code available for modifications that were performed on Firefox. In addition, it is necessary to 'label' the changes that were made to the Firefox source code. According to the Mozilla.org site, it is possible to do this by way of diffs. The FAQs posted on the Mozilla.org site contain more information regarding the specifics that need to be taken into account in this situation.

The second scenario involves using a component of an Apache product, but not the whole thing. Let's say we are interested in using parts of the Apache Jakarta Commons code base. Apache makes separate components available for download, such as the commons-pool component, a set of Java packages used for object pooling. The Apache License 2.0 addresses "Derivative Works", defined as any work in Source or Object form that is based on the specific Apache software project code. Similar to the Mozilla license, it is necessary to annotate any changes made to the source code if we choose to use only parts of it, and then to include the license file properly as well as a place where anybody can obtain the modified source.

Our third scenario involves GNU. This one gets a little tricky. If you choose to use code that is distributed by GNU and it is distributed under the General Public License (GPL), then your code must conform to the GNU license. In fact, the license you must use to distribute your product may have to be the same GNU license. This is where open interpretation can get a little messy.

Traditionally, software that uses a library released under the GPL is required to be distributed under the same GPL license if the developer chooses to distribute the product (note: the GPL allows the developer not to distribute the product). This is not to say that it is not possible to use GNU-licensed software and proprietary software in conjunction with each other. To do this, the software must be logically separate, as mandated by GNU.

Since these licenses all conform to the notion of what the OSI terms "open source" the answers should be very similar. However, there are slight differences that must be taken into account.

What happens if the open source license does not conform to the OSI specification? This does not necessarily mean that you should not consider using the product. But you do have to read the license and make sure you understand the terms and conditions.

Some non-OSI-compliant licenses are attached to software that is developed and released for academic purposes. This often occurs with the usage of university-based software development projects. Commercial users may find these licenses to be more strict.

You might also encounter "dual-licensing." One example of this is the MySQL database which is released under two licenses, one of which conforms to the OSI standards.

Project Management

There are also project management concerns that must be taken into account when adopting open source software as part of a development project. For example, you may need to allot more time in the initial phases of the project to allow the development team time for discovery and learning. A proof of concept/prototype phase may be a good idea. This may be very important especially if the development team has never used the product in the past.

The process should take into account risk management issues associated with the maturity of the open source product and developer knowledge of the product. Many times, the risks will only be understood by the developer, who will need to help the management team understand the risks.

Another area that can contribute to project management risk is the quality of documentation for the open source product. Although it is a diminishing stereotype, there are still open source projects that lack documentation. Developers will be developers. The expectation that the code is good enough to understand the software may indicate a project that is still in the inception phase.

The Proof of Concept phase becomes very important in this case because the developer will need to determine the viability of the software in question. This is not to say that all open source software is immature. In fact many open source projects contain more functionality and fewer bugs than Commercial Off the Shelf (COTS)-based equivalents.

Risk management may play an increased role depending on the maturity of the product. Although not ultimately a means of determining software maturity, a critical mass of developers on a project can speak to how stable a project is, and how much support the project will have. There are open source projects (like those of the Apache Software Foundation) that have over one thousand people looking at the source code, submitting bugs, patches, and enhancements. There is a full development lifecycle that is properly defined, documented, and followed.

Because of the high level of maturity, from a risk management standpoint many organizations treat software developed by well established organizations like the Apache Software Foundation or JBoss, Inc. as they would treat COTS-based software. There are also projects that are maintained by only one person—but these may be mature, well documented projects also. Evaluate on a case-by-case basis.

Support

Support works a little differently in the world of open source. For some projects, it is possible to purchase a support contract from either the creators of the software, or from a consulting company that is either partnered with the organization or that employs a contributing developer to the project. Other times, the only means of support include mailing lists, archives, a wiki, and documentation.

Management may see the lack of paid-for support as contributing to the overall risk of the project. The reality, however, is that having access to the developers on the project results in more expedited answers than going through the multiple levels of support one would find when contacting companies of COTS-based products. The developer may have to explain the support risks associated with an open source product and how that plays into the overall project risk matrix.

Etiquette

There are some unwritten rules a developer must follow when communicating with the open source community. First, the developer is expected to have read through the license, documentation, any available wikis, and most importantly the mail/discussion archives. It is the responsibility of the developer to seek out any answers using this medium before asking a question that was previously asked. If an error is found, it may have come up in the past and was addressed in the mail, newsgroup, or discussion forum archives.

Developers are also expected to install the software, deploy any examples available, and attempt to develop a "Hello World." If the developer is not successful at installing the software, then the expectation is that the developer will proceed to read through the installer/build scripts and any necessary source code, make modifications to the environment or build scripts and attempt to re-install the software.

If still not successful, the developer can then send mail or post a message to the community annotating what was performed, and what error is occurring. Sending a message stating that something does not work without having done the proper due diligence to figure it out for yourself is poor etiquette.

Although most people in the open source community do respond with advice regardless of whether the message was deemed intelligent, it is in your best interest to do your homework first.

Conclusion

The benefits associated with open source software can be realized as long as there is a plan to mitigate any risks that are part of using open source products. The ubiquity of open source software is increasing. Whether used as a small tool for software development, or as the core infrastructure for a COTS-based product, open source software will be incorporated into many specifics of an increasing number of software projects this year. Being aware of the specifics and issues around open source software will help the software developer plan appropriately for project success.

Source:-http://www.developerdotstar.com/mag/articles/lancaon_open_source.html

Monday, 05 December 2016 05:02

DevOps – A Collaborative Approach


There are lots of different opinions about what encompasses the definition of DevOps. Speaking in very broad terms, DevOps was born to improve the agility of IT service delivery, and it facilitates collaboration, communication and integration between IT operations and software developers. A DevOps environment consists of a cross-functional team including QA, developers, business analysts, DBAs, operations engineers and so on. Incorporating DevOps helps companies get more done and deploy code more frequently.

Businesses these days face some common problems. After application delivery, businesses are sceptical about change. The reason is usually vulnerable, brittle software and the platform it sits on. Software is risky, prone to errors, and unpredictable. Introducing new features or fixing application problems takes a long time, mainly due to a bureaucratic change-management system. Deployments are also risky: no one is completely confident that the software will actually work in the live environment, that the code will cope with the load, or that it will work as expected. The product is usually pushed out, and teams just hope everything works. More often than not, the problems start manifesting after the project goes live. The developers use one system to develop the code, which is tested on a completely different system and deployed on entirely different machines, causing incompatibility issues due to different properties files. If the business units are siloed, the issues get passed between different teams; there can be siloisation within teams as well. If the silos are not in the same office, or even the same city, this leads to a "them vs us" mentality, making people even more sceptical.

The DevOps approach believes in handling business in a more productive and profitable manner by building teams and software to resolve these issues. The problems mentioned above can be addressed by a DevOps approach in which people with multidisciplinary skill sets are happy to roll up their sleeves and take on multidimensional roles. They make connections and bridge gaps, tremendously impacting the business. This builds a cross-disciplinary approach within the teams, with maximum reliability across different departments, leading to faster time to market, happier clients, better availability and reliability, and more focused team energy. The goals of the DevOps approach are spread across the complete delivery pipeline, improving deployment frequency. DevOps promotes sets of methods and processes for collaboration and communication between product development, quality assurance and IT operations. It encourages understanding the domain for which software is being written and developing communication skills, and it fosters a conscious passion and sensitivity to ensure that the business succeeds.

In a non-DevOps environment, the operations team's performance is measured by the stability of the system, whereas the development team is gauged by the features delivered. In a DevOps environment, a single, whole team is responsible for both system stability and delivering new features. There is continuous integration, shared code, automated deploys, and test-driven techniques. Problems in the application code, configuration or infrastructure get exposed earlier, mainly because software is not simply thrown over to operations once the coding is over. The change sets are smaller, making problems less complex, and as team members do not have to wait for another team to find and fix a problem, resolution times are much faster.

Additionally, in a typical IT environment, people need to wait for other machines, other people, or updated software. Employees often get stuck resolving the same issues over and over again, and this can become quite frustrating. It becomes essential for organisations to remove the ungratifying parts of their employees' jobs so that they can add more value to the organisation, making it more productive and profitable. Standardized production environments and automated deployments are the main aspects of DevOps that make deployments predictable, and this frees up resources from mundane tasks. This software development method acknowledges and utilizes the interdependence of IT operations, software development and quality assurance to help companies create new products faster while improving operational performance.

There are several technical and business benefits of this collaboration across different roles. This includes continuous software delivery, faster problem resolution, reduced complexity of the problems, more stable operating environments, faster feature delivery and more time to provide value addition rather than fixing or maintaining. The DevOps movement is yet to reach its full potential, and the statistics have shown that this is not just a fleeting fad. It promises a paradigm shift, a significant revolution in the software industry to blur the boundaries.

Source:-http://www.idexcel.com/blog/tag/devops/

About Manomaya

Manomaya is a Total IT Solutions Provider. Manomaya Software Services is a leading software development company in India providing offshore software development services and solutions.
