Blog

Monday, 22 August 2016 04:49

Ways a Copywriter Can Help a Small Business


If you ask a small business owner to give a presentation in public, you can see beads of perspiration forming almost immediately. That is, if your attention isn’t drawn to their knees knocking or their leg imitating a piston right where they sit.

While public speaking is a known and easy-to-admit fear of many, writing is a more subtle fear. Accepting the task is easy, but when it comes to stringing words together, many business owners are like a deer in headlights.

Just as the best speakers value speech writers, and professional athletes have coaches, business owners need copywriters and copy editors. Here are a few areas where most small businesses could use a wordsmith:

 

WRITING BIG PIECES

  • Web Page Copy: There are plenty of small business web pages filled with mangled text, piecemealed and pasted from multiple sources, not always from their own pages or brochures.
  • Ebooks: Every business owner likes to think they are an expert in their field (ask their employees). Every business owner would like a bigger mailing list. Ebooks can prove the former while building the latter.
  • Press Releases: While some writers think press releases are a thing of the past, small business owners (your potential clients) do not. In smaller towns, well-written press releases can mean local media coverage.
  • Blog Posts: This type of writing is the mud where many businesses get stuck. Providing conversational copy, relevant to a targeted audience, with a clear call-to-action, often means bigger profits in less time (and agony).
  • Articles: Most business owners are not familiar with terms like article marketing, advertorials, or native advertising. They like the concept, but shy away from the writing.
  • Sales Pages and Landing Pages: Most small business owners are familiar with the concept of long-form sales letters. Few are adept at putting the copy, the callouts, and the calls-to-action together.  

SMALL BITS & PIECES

  • Commercials: Television and radio remain popular platforms for small business advertising dollars, especially in smaller local markets. As you know, short-and-quick is not always synonymous with clear-and-concise.
  • Catalog or Product Descriptions: Small blurbs like product descriptions, catalog copy and menu items are often difficult for a small business owner. It becomes an exercise of the ketchup trying to read its own label – from inside the bottle.
  • Email: Canned responses are time savers. Template sales emails or inquiry emails can also save time and increase outreach. When they are written clearly and concisely, a small business can send these emails with confidence.
  • Display Ad Copy: From taglines to internal signage, chamber directory ads to phone books, some small businesses haven’t changed their ad copy in years.
  • Brochures: Still viewed as an expected leave-behind, sales collateral like brochures and sales cards holds a lot of value for some businesses. Creative brochure copy can also seed a library of copy to be reused elsewhere in the business.
  • Status Updates: When a small business is active on social media, its status updates too often read more like commercials than conversations.

 MOUTH PIECES

  •  Speeches and Presentations: Whether the full body of the speech or an outline, some of the best presenters tap into the strengths of a writer. Presentation slideshows are often in need of a good writer or editor.
  •  Profiles and Bios: A lot of business leaders have difficulty writing about themselves. Bio pages on the web, in print or media kits, and social media profiles can all use the touch of a professional writer.
  •  Video and Podcast Scripts: The “ums” and “ers” along with the always popular “so” and “basically” fill video voice overs and podcast episodes across the mediums. Good writing and a tool like CuePrompter will make your clients sound eloquent when they say the words you’ve written.
  •  Transcription and Re-purposing: Smart business owners are starting to realize the value of recorded presentations or conversations, capturing large portions or small money quotes they can use elsewhere. A writer or editor who can extract the value from the whole is an asset to the company.

SPECIALTY PIECES

  • SEO Copywriting: Writing title tags, headlines, and meta data is a specialized writing skill all its own. Recognizing how to improve copy for findability is also a strength many businesses don't have internally. SEO copywriting is one of the most sought-after types of writing.
  • Infographics: This style of writing also requires talents for both research and design. Being able to partner with a graphics person can strengthen the copy and the flow.
  • Tutorials: Technical writing or instructional manuals are very important to many kinds of businesses. Small businesses with a high turnover of employment are often seeking operational guidelines for new employees. Manufacturers are always on the lookout for a simpler way to teach customers how to use their products.
  • Grant Writing: If writing blog posts strikes fear into the mind of a business owner, grant writing can send them running for cover.
  • Policies: Terms of service, disclaimers, and codes of conduct are sought after as more businesses launch their own websites. This type of writing often includes a back-and-forth approval process with a legal department.

A lot of small business owners will avoid writing at all costs, sometimes delegating to someone within their company. Not every business will use all of the writing types listed above. It’s likely they haven’t yet considered the possibility of most of them.

 

Source: http://seocopywriting.com/21-ways-copywriter-can-help-small-business/

 

Saturday, 20 August 2016 04:45

Operating System - Process Scheduling


Definition

Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy.

Process scheduling is an essential part of multiprogramming operating systems. Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.

Process Scheduling Queues

The OS maintains all Process Control Blocks (PCBs) in process scheduling queues. It keeps a separate queue for each process state, and the PCBs of all processes in the same execution state are placed in the same queue. When the state of a process changes, its PCB is unlinked from its current queue and moved to its new state queue. The operating system maintains the following important process scheduling queues:

  • Job queue − This queue keeps all the processes in the system.
  • Ready queue − This queue keeps a set of all processes residing in main memory, ready and waiting to execute. A new process is always put in this queue.
  • Device queues − The processes which are blocked due to unavailability of an I/O device constitute this queue.

The OS can use different policies to manage each queue (FIFO, Round Robin, Priority, etc.). The OS scheduler determines how to move processes between the ready queue and the run queue, which can have only one entry per processor core on the system.
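To make these structures concrete, here is a minimal TypeScript sketch of the three queues and a simple FIFO ready-queue policy. The type and function names (PCB, admit, blockOnDevice, pickNext) are illustrative, not taken from any real kernel:

```typescript
// Minimal sketch of process scheduling queues; all names are illustrative.
type ProcessState = "new" | "ready" | "running" | "waiting" | "terminated";

interface PCB {
  pid: number;
  state: ProcessState;
}

const jobQueue: PCB[] = [];                     // all processes in the system
const readyQueue: PCB[] = [];                   // processes in memory, ready to run
const deviceQueues = new Map<string, PCB[]>();  // processes blocked on each I/O device

// Admit a new process: it enters the job queue and, once in memory, the ready queue.
function admit(pcb: PCB): void {
  jobQueue.push(pcb);
  pcb.state = "ready";
  readyQueue.push(pcb);
}

// Block a running process on a device: its PCB is unlinked from "running"
// and parked on that device's queue, mirroring the PCB moves described above.
function blockOnDevice(pcb: PCB, device: string): void {
  pcb.state = "waiting";
  const q = deviceQueues.get(device) ?? [];
  q.push(pcb);
  deviceQueues.set(device, q);
}

// FIFO policy for the ready queue: the scheduler simply takes the head.
function pickNext(): PCB | undefined {
  const next = readyQueue.shift();
  if (next) next.state = "running";
  return next;
}
```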

Two-State Process Model

The two-state process model refers to the running and not-running states, which are described below:

Running

The process that currently holds the CPU and is executing.

Not Running

Processes that are not running are kept in a queue, waiting for their turn to execute. Each entry in the queue is a pointer to a particular process, and the queue is implemented using a linked list. The dispatcher works as follows: when the running process is interrupted, it is transferred to the waiting queue; if the process has completed or aborted, it is discarded. In either case, the dispatcher then selects a process from the queue to execute.
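A compact sketch of that two-state dispatcher logic, again with illustrative names:

```typescript
// Two-state model sketch: a single "not running" queue plus one running slot.
// All names are illustrative.
interface Proc { pid: number; finished: boolean; }

const notRunning: Proc[] = [];   // queue of processes waiting for their turn
let running: Proc | null = null; // at most one process holds the CPU

// Dispatcher: called when the running process is interrupted or ends.
function dispatch(): void {
  if (running) {
    if (!running.finished) {
      notRunning.push(running);  // interrupted: back to the waiting queue
    }
    // completed or aborted processes are simply discarded
  }
  running = notRunning.shift() ?? null; // select the next process to execute
}
```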

Schedulers

Schedulers are special system software which handle process scheduling in various ways. Their main task is to select the jobs to be submitted into the system and to decide which process to run. Schedulers are of three types −

  • Long-Term Scheduler
  • Short-Term Scheduler
  • Medium-Term Scheduler

Long Term Scheduler

It is also called the job scheduler. A long-term scheduler determines which programs are admitted to the system for processing. It selects processes from the job queue and loads them into memory for execution, where they become candidates for CPU scheduling.

The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O bound and processor bound. It also controls the degree of multiprogramming. If the degree of multiprogramming is stable, then the average rate of process creation must be equal to the average departure rate of processes leaving the system.

On some systems, the long-term scheduler may be minimal or absent; time-sharing operating systems typically have no long-term scheduler. The long-term scheduler comes into play when a process changes state from new to ready.

Short Term Scheduler

It is also called the CPU scheduler. Its main objective is to increase system performance in accordance with a chosen set of criteria. It handles the transition of a process from the ready state to the running state: the CPU scheduler selects one process from among those that are ready to execute and allocates the CPU to it.

Short-term schedulers, also known as dispatchers, make the decision of which process to execute next. Short-term schedulers are faster than long-term schedulers.

Medium Term Scheduler

Medium-term scheduling is part of swapping. It removes processes from memory and thereby reduces the degree of multiprogramming. The medium-term scheduler is in charge of handling swapped-out processes.

A running process may become suspended if it makes an I/O request. A suspended process cannot make any progress towards completion. In this condition, to remove the process from memory and make space for other processes, the suspended process is moved to secondary storage. This is called swapping, and the process is said to be swapped out or rolled out. Swapping may be necessary to improve the process mix.

Context Switch

A context switch is the mechanism for storing and restoring the state, or context, of the CPU in a Process Control Block so that a process's execution can be resumed from the same point at a later time. Using this technique, a context switcher enables multiple processes to share a single CPU. Context switching is an essential feature of a multitasking operating system.

When the scheduler switches the CPU from executing one process to executing another, the state of the currently running process is stored in its process control block. After this, the state of the process to run next is loaded from its own PCB and used to set the program counter, registers, and so on. At that point, the second process can start executing.

 

Context switches are computationally intensive, since register and memory state must be saved and restored. To reduce context-switching time, some hardware systems employ two or more sets of processor registers. When the process is switched, the following information is stored for later use:

  • Program Counter
  • Scheduling information
  • Base and limit register value
  • Currently used register
  • Changed State
  • I/O State information
  • Accounting information  
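As a rough sketch, this saved information can be pictured as the fields of a PCB record. The structure below is illustrative, not how any particular kernel lays it out:

```typescript
// Sketch of the information saved in a PCB during a context switch.
// Field names follow the list above; the layout itself is illustrative.
interface SavedContext {
  programCounter: number;
  registers: number[];                 // currently used registers
  baseRegister: number;                // base and limit register values
  limitRegister: number;
  state: string;                       // changed process state
  schedulingInfo: { priority: number };
  ioState: string[];                   // open devices / pending I/O
  accounting: { cpuTimeUsed: number };
}

const pcbTable = new Map<number, SavedContext>(); // one PCB per pid

// Save the outgoing process's context, then load the incoming one.
function contextSwitch(outPid: number, outCtx: SavedContext, inPid: number): SavedContext | undefined {
  pcbTable.set(outPid, outCtx);  // store state of the current process
  return pcbTable.get(inPid);    // restore state of the next process to run
}
```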

Source: http://www.tutorialspoint.com/operating_system/os_process_scheduling.htm

Thursday, 18 August 2016 05:01

What is Virtualization?


Virtualization is the process of creating a software-based (or virtual) representation of something rather than a physical one. Virtualization can apply to applications, servers, storage, and networks, and is the single most effective way to reduce IT expenses while boosting efficiency and agility for businesses of all sizes.

HOW IT WORKS

Virtualization 101

IT organizations are challenged by the limitations of today’s x86 servers, which are designed to run just one operating system and application at a time. As a result, even small data centers have to deploy many servers, each operating at just 5 to 15 percent of capacity—highly inefficient by any standard.

Virtualization uses software to simulate the existence of hardware and create a virtual computer system. Doing this allows businesses to run more than one virtual system, and multiple operating systems and applications, on a single server. This can provide economies of scale and greater efficiency.

The Virtual Machine

A virtual computer system is known as a “virtual machine” (VM): a tightly isolated software container with an operating system and application inside. Each self-contained VM is completely independent. Putting multiple VMs on a single computer enables several operating systems and applications to run on just one physical server, or “host”.

A thin layer of software called a hypervisor decouples the virtual machines from the host and dynamically allocates computing resources to each virtual machine as needed.

Key Properties of Virtual Machines

VMs have the following characteristics, which offer several benefits.

Partitioning

  •  Run multiple operating systems on one physical machine
  •  Divide system resources between virtual machines

Isolation

  •  Provide fault and security isolation at the hardware level
  •  Preserve performance with advanced resource controls

Encapsulation

  • Save the entire state of a virtual machine to files
  • Move and copy virtual machines as easily as moving and copying files

Hardware Independence

  • Provision or migrate any virtual machine to any physical server

Source: http://www.vmware.com/solutions/virtualization.html

Tuesday, 16 August 2016 05:28

What Is Cryptography?


Cryptography is the science of providing security for information. It has been used historically as a means of providing secure communication between individuals, government agencies, and military forces. Today, cryptography is a cornerstone of the modern security technologies used to protect information and resources on both open and closed networks.

Modern cryptography concerns itself with the following four objectives:

1) Confidentiality (the information cannot be understood by anyone for whom it was unintended)

2) Integrity (the information cannot be altered in storage or transit between sender and intended receiver without the alteration being detected)

3) Non-repudiation (the creator/sender of the information cannot deny at a later stage his or her intentions in the creation or transmission of the information)

4) Authentication (the sender and receiver can confirm each other's identity and the origin/destination of the information)

Procedures and protocols that meet some or all of the above criteria are known as cryptosystems. Cryptosystems are often thought to refer only to mathematical procedures and computer programs; however, they also include the regulation of human behavior, such as choosing hard-to-guess passwords, logging off unused systems, and not discussing sensitive procedures with outsiders.

The word is derived from the Greek kryptos, meaning hidden. The origin of cryptography is usually dated from about 2000 BC, with the Egyptian practice of hieroglyphics. These consisted of complex pictograms, the full meaning of which was only known to an elite few. The first known use of a modern cipher was by Julius Caesar (100 BC to 44 BC), who did not trust his messengers when communicating with his governors and officers. For this reason, he created a system in which each character in his messages was replaced by a character three positions ahead of it in the Roman alphabet.
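Caesar's substitution scheme is simple enough to show in a few lines. Here is a small TypeScript sketch of a shift-by-three cipher over the modern 26-letter Latin alphabet (rather than the Roman one, purely for illustration):

```typescript
// A minimal Caesar cipher: each letter is replaced by the letter
// `shift` positions ahead of it in the alphabet (default shift of 3).
function caesarEncrypt(plaintext: string, shift = 3): string {
  return plaintext.replace(/[a-z]/gi, (ch) => {
    const base = ch === ch.toLowerCase() ? 97 : 65;        // 'a' or 'A'
    const offset = (ch.charCodeAt(0) - base + shift) % 26;
    return String.fromCharCode(base + offset);
  });
}

// Decryption is just encryption with the complementary shift.
function caesarDecrypt(ciphertext: string, shift = 3): string {
  return caesarEncrypt(ciphertext, 26 - (shift % 26));
}

console.log(caesarEncrypt("ATTACK AT DAWN")); // "DWWDFN DW GDZQ"
console.log(caesarDecrypt("DWWDFN DW GDZQ")); // "ATTACK AT DAWN"
```

The weakness, of course, is that there are only 25 possible shifts to try, which is why such ciphers survive today only as teaching examples.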

In recent times, cryptography has turned into a battleground of some of the world's best mathematicians and computer scientists. The ability to securely store and transfer sensitive information has proved a critical factor in success in war and business.

Because governments do not wish certain entities in and out of their countries to have access to ways to receive and send hidden information that may be a threat to national interests, cryptography has been subject to various restrictions in many countries, ranging from limitations of the usage and export of software to the public dissemination of mathematical concepts that could be used to develop cryptosystems. However, the Internet has allowed the spread of powerful programs and, more importantly, the underlying techniques of cryptography, so that today many of the most advanced cryptosystems and ideas are now in the public domain.

Source: http://searchsoftwarequality.techtarget.com/definition/cryptography

Saturday, 13 August 2016 04:28

What to Do to Improve Website Speed?


If you want to improve website speed, there are a few steps to take. First, you need to measure your website speed – otherwise how do you know it's slow?

1. Measure Load Times

In order to measure load times, you need a good tool, and the choice is rich. The Pingdom Page Load Time tool and Google Analytics Site Speed reports give a good idea of your site's general performance. WebPageTest is a more advanced tool because it allows you to test your site in different browsers and spot slow areas on your site.

These tests can take some time for a large site, but since they give you detailed data about which parts are slow, just be patient. Good tools report not only the average site speed but also element-level metrics, such as time to first byte, user time, time to fully load, and the share of images, HTML, and JavaScript files, which is useful later when you start fixing the problematic areas.
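If you want a quick, rough number before reaching for those tools, a few lines of script can time a single request. The sketch below assumes Node.js 18+ (for the global fetch); it measures one fetch, not a full browser render, so treat it as a sanity check rather than a replacement for WebPageTest:

```typescript
// Rough load-time probe (Node 18+ assumed). It approximates time to first
// byte and total download time for a single URL.
async function probe(url: string): Promise<void> {
  const start = performance.now();
  const res = await fetch(url);
  const firstByte = performance.now() - start;   // headers received
  const body = await res.text();
  const total = performance.now() - start;       // body fully downloaded

  console.log(url);
  console.log(`  status:       ${res.status}`);
  console.log(`  first byte:   ${firstByte.toFixed(0)} ms`);
  console.log(`  fully loaded: ${total.toFixed(0)} ms`);
  console.log(`  body size:    ${(body.length / 1024).toFixed(1)} kB`);
}

probe("https://example.com/");
```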

2. Move to a Faster Server

One of the obvious reasons a site is slow is that the server you are hosting it on is slow. The reasons here could be numerous – from a web hosting provider that lacks the capacity to offer fast servers, to the type of your hosting account.

The easier solution here is to upgrade your account. For instance, if you have a large site with many pages and frequent database reads/writes and you are still using a shared account, then no provider on Earth can offer the speed you need. In this case, if you are happy with the provider per se, the solution is to upgrade from a shared account to a VPS (Virtual Private Server) or even to a dedicated server. The monthly cost of a VPS or a dedicated server is much higher than what you are paying for a shared account, but if your site is making you money (or at least has the potential to), a slow website is literally killing your business.

On the other hand, if your web hosting provider is simply not good, upgrading your account won't solve your problem. The only thing you can do is migrate your sites to a better web hosting provider.

3. Optimize Your Site's Code and Images

Your server might be fast, but if your site itself is slow you will still experience speed issues. If your code and images are not optimized for fast loading, you won't see speed improvements until you fix them. This task can take a very long time, especially if your code and images are bloated, but you've got to do it.

For images, you can use compression and/or smaller sizes; this will speed up loading considerably. For HTML, CSS, JavaScript, PHP, and other web languages there are tons of tricks (and tools) for optimizing your code.
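As a rough illustration of why this matters, the sketch below (Node.js assumed) strips whitespace and comments from a small CSS snippet and gzips the result, printing the size at each step. The stripping is deliberately naive; in a real project you would use a proper minifier and let the web server handle compression:

```typescript
// Toy size comparison: raw CSS vs. naively "minified" CSS vs. gzipped output.
import { gzipSync } from "node:zlib";

const css = `
/* example stylesheet */
.header   {  color: #333;   margin: 0 auto;  }
.footer   {  color: #666;   padding: 16px;   }
`;

const minified = css
  .replace(/\/\*[\s\S]*?\*\//g, "")  // drop comments
  .replace(/\s+/g, " ")              // collapse whitespace
  .trim();

const gzipped = gzipSync(Buffer.from(minified));

console.log(`original: ${css.length} bytes`);
console.log(`minified: ${minified.length} bytes`);
console.log(`gzipped:  ${gzipped.length} bytes`);
```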

Website speed is not a hugely important factor for search engine rankings, though it does count. The bigger problem with slow sites is that they are not user-friendly, which in turn kills conversions. If you don't want to lose money because of your site's speed issues, take the time to fix them – it will pay off in the long run.

Source: http://www.webconfs.com/website-speed-and%20-search-rankings-article-64.php

Thursday, 11 August 2016 05:09

The history of computer data storage, in pictures


Nowadays we are used to having hundreds of gigabytes of storage capacity in our computers. Even tiny MP3 players and other handheld devices usually have several gigabytes of storage. This was pure science fiction only a few decades ago. For example, the first hard disk drive to have gigabyte capacity was as big as a refrigerator, and that was in 1980. Not so long ago!

Pingdom stores a lot of monitoring data every single day, and considering how much we take today’s storage capacity for granted, it’s interesting to look back and get things in perspective. Here is a look back at some interesting storage devices from the early computer era.

THE SELECTRON TUBE

The Selectron tube had a capacity of 256 to 4096 bits (32 to 512 bytes). The 4096-bit Selectron was 10 inches long and 3 inches wide. Originally developed in 1946, the memory storage device proved expensive and suffered from production problems, so it never became a success.

 Above: The 1024-bit Selectron.

PUNCH CARDS

Early computers often used punch cards for input both of programs and data. Punch cards were in common use until the mid-1970s. It should be noted that the use of punch cards predates computers. They were used as early as 1725 in the textile industry (for controlling mechanized textile looms).  

Above: Card from a Fortran program: Z(1) = Y + W(1)

 Above left: Punch card reader. Above right: Punch card writer.

PUNCHED TAPE

Same as with punch cards, punched tape was originally pioneered by the textile industry for use with mechanized looms. For computers, punch tape could be used for data input but also as a medium to output data. Each row on the tape represented one character.

 Above: 8-level punch tape (8 holes per row).

MAGNETIC DRUM MEMORY

Invented all the way back in 1932 (in Austria), it was widely used in the 1950s and 60s as the main working memory of computers. In the mid-1950s, magnetic drum memory had a capacity of around 10 kB.

Above left: The magnetic Drum Memory of the UNIVAC computer. Above right: A 16-inch-long drum from the IBM 650 computer. It had 40 tracks, 10 kB of storage space, and spun at 12,500 revolutions per minute.

THE HARD DISK DRIVE

The first hard disk drive was the IBM Model 350 Disk File that came with the IBM 305 RAMAC computer in 1956. It had 50 24-inch discs with a total storage capacity of 5 million characters (just under 5 MB).

Above: IBM Model 350, the first-ever hard disk drive.

The first hard drive to have more than 1 GB in capacity was the IBM 3380 in 1980 (it could store 2.52 GB). It was the size of a refrigerator, weighed 550 pounds (250 kg), and the price when it was introduced ranged from $81,000 to $142,400.

Above left: A 250 MB hard disk drive from 1979. Above right: The IBM 3380 from 1980, the first gigabyte-capacity hard disk drive.

THE LASERDISC

We mention it here mainly because it was the precursor to the CD-ROM and other optical storage solutions. It was mainly used for movies. The first commercially available laserdisc system reached the market late in 1978 (then called Laser Videodisc and, more funkily branded, DiscoVision), and the discs were 11.81 inches (30 cm) in diameter. The discs could hold up to 60 minutes of audio/video on each side. The first laserdiscs had entirely analog content. The basic technology behind laserdiscs was invented all the way back in 1958.

Above left: A Laserdisc next to a regular DVD. Above right: Another Laserdisc.

THE FLOPPY DISC

The diskette, or floppy disk (named so because it was flexible), was invented by IBM and in common use from the mid-1970s to the late 1990s. The first floppy disks were 8 inches; later came 5.25-inch and 3.5-inch formats. The first floppy disk, introduced in 1971, had a capacity of 79.7 kB and was read-only. A read-write version came a year later.

Above left: An 8-inch floppy and floppy drive next to a regular 3.5-inch floppy disk. Above right: The convenience of easily removable storage media.

MAGNETIC TAPE

Magnetic tape was first used for data storage in 1951. The tape device was called UNISERVO and was the main I/O device on the UNIVAC I computer. The effective transfer rate for the UNISERVO was about 7,200 characters per second. The tapes were metal and 1200 feet long (365 meters) and therefore very heavy.

Above left: The row of tape drives for the UNIVAC I computer. Above right: The IBM 3410 Magnetic Tape Subsystem, introduced in 1971.

And of course, we can’t mention magnetic tape without also mentioning the standard compact cassette, which was a popular way of data storage for personal computers in the late 70s and 80s. Typical data rates for compact cassettes were 2,000 bit/s. You could store about 660 kB per side on a 90-minute tape.
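Those two figures are consistent, as a quick back-of-the-envelope check shows:

```typescript
// Sanity check: 2,000 bit/s over one 45-minute side of a 90-minute tape.
const bitsPerSecond = 2000;
const secondsPerSide = 45 * 60;                            // 2,700 s per side
const bytesPerSide = (bitsPerSecond * secondsPerSide) / 8; // 675,000 bytes
console.log(bytesPerSide / 1024);                          // ≈ 659 kB, i.e. about the 660 kB quoted above
```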

Above left: The standard compact cassette. Above right: The Commodore Datassette is sure to bring up fond memories for people who grew up in the 80s.

There are so many interesting pictures from “the good old days” when you look around on the web. These were some of the best we could find, and we hope you liked them.

PICTURE SOURCES: The Selectron. The punch card. The punch card reader and writer. Punched tape 1 and 2. UNIVAC magnetic drum. IBM 650 computer magnetic drum. The IBM Model 350 Disk File. 250 MB hard disk drive from 1979. The IBM 3380. Laserdisc vs DVD. Held Laserdisc. 8-inch floppy drive. 8-inch floppy in use. UNISERVO and UNIVAC I. The IBM 3410. The compact cassette. The Datassette.

Article source: http://royal.pingdom.com/2008/04/08/the-history-of-computer-data-storage-in-pictures/

Monday, 08 August 2016 04:49

What Is Angular JS?


AngularJS is a structural framework for dynamic web apps. It lets you use HTML as your template language and lets you extend HTML's syntax to express your application's components clearly and succinctly. Angular's data binding and dependency injection eliminate much of the code you would otherwise have to write. And it all happens within the browser, making it an ideal partner with any server technology.

Angular is what HTML would have been, had it been designed for applications. HTML is a great declarative language for static documents. It does not contain much in the way of creating applications, and as a result building web applications is an exercise in "What do I have to do to trick the browser into doing what I want?"

The impedance mismatch between dynamic applications and static documents is often solved with:

• a library - a collection of functions which are useful when writing web apps. Your code is in charge and it calls into the library when it sees fit. E.g., jQuery.

• frameworks - a particular implementation of a web application, where your code fills in the details. The framework is in charge and it calls into your code when it needs something app specific. E.g., durandal, ember, etc.

Angular takes another approach. It attempts to minimize the impedance mismatch between document-centric HTML and what an application needs by creating new HTML constructs. Angular teaches the browser new syntax through a construct we call directives; a minimal sketch follows the list below. Examples include:

• Data binding, as in {{}}.

• DOM control structures for repeating, showing and hiding DOM fragments.

• Support for forms and form validation.

• Attaching new behavior to DOM elements, such as DOM event handling.

• Grouping of HTML into reusable components.
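Here is a minimal AngularJS (1.x) sketch of the first two items above: data binding with {{ }} and DOM control structures such as ng-repeat and ng-show. The module and controller names are illustrative:

```typescript
// Minimal AngularJS (1.x) data-binding sketch; module/controller names are illustrative.
declare const angular: any; // assumes the AngularJS script is already loaded on the page

angular.module("demoApp", []).controller("TodoController", [
  "$scope",
  function ($scope: any) {
    $scope.newItem = "";
    $scope.items = ["write copy", "test app"];
    $scope.add = function () {
      if ($scope.newItem) {
        $scope.items.push($scope.newItem);
        $scope.newItem = "";
      }
    };
  },
]);

/* Matching template (HTML):
<div ng-app="demoApp" ng-controller="TodoController">
  <input ng-model="newItem" placeholder="New item">
  <button ng-click="add()">Add</button>
  <p ng-show="items.length">You have {{ items.length }} items:</p>
  <ul>
    <li ng-repeat="item in items">{{ item }}</li>
  </ul>
</div>
*/
```

Notice that the controller never touches the DOM: the template declares how the list should render, and the data binding keeps it in sync as $scope changes.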

A complete client-side solution

Angular is not a single piece in the overall puzzle of building the client-side of a web application. It handles all of the DOM and AJAX glue code you once wrote by hand and puts it in a well-defined structure. This makes Angular opinionated about how a CRUD (Create, Read, Update, Delete) application should be built. But while it is opinionated, it also tries to make sure that its opinion is just a starting point you can easily change. Angular comes with the following out-of-the-box:

• Everything you need to build a CRUD app in a cohesive set: Data-binding, basic templating directives, form validation, routing, deep-linking, reusable components and dependency injection.

• Testability story: Unit-testing, end-to-end testing, mocks and test harnesses.

• Seed application with directory layout and test scripts as a starting point.

Angular's sweet spot

Angular simplifies application development by presenting a higher level of abstraction to the developer. Like any abstraction, it comes at a cost of flexibility. In other words, not every app is a good fit for Angular. Angular was built with the CRUD application in mind. Luckily CRUD applications represent the majority of web applications. To understand what Angular is good at, though, it helps to understand when an app is not a good fit for Angular.

Games and GUI editors are examples of applications with intensive and tricky DOM manipulation. These kinds of apps are different from CRUD apps, and as a result are probably not a good fit for Angular. In these cases it may be better to use a library with a lower level of abstraction, such as jQuery.

The Zen of Angular

Angular is built around the belief that declarative code is better than imperative when it comes to building UIs and wiring software components together, while imperative code is excellent for expressing business logic.

• It is a very good idea to decouple DOM manipulation from app logic. This dramatically improves the testability of the code.

• It is a really, really good idea to regard app testing as equal in importance to app writing. Testing difficulty is dramatically affected by the way the code is structured.

• It is an excellent idea to decouple the client side of an app from the server side. This allows development work to progress in parallel, and allows for reuse of both sides.

• It is very helpful indeed if the framework guides developers through the entire journey of building an app: From designing the UI, through writing the business logic, to testing.

• It is always good to make common tasks trivial and difficult tasks possible.

Angular frees you from the following pains:

• Registering callbacks: Registering callbacks clutters your code, making it hard to see the forest for the trees. Removing common boilerplate code such as callbacks is a good thing. It vastly reduces the amount of JavaScript coding you have to do, and it makes it easier to see what your application does.

• Manipulating HTML DOM programmatically: Manipulating HTML DOM is a cornerstone of AJAX applications, but it's cumbersome and error-prone. By declaratively describing how the UI should change as your application state changes, you are freed from low-level DOM manipulation tasks. Most applications written with Angular never have to programmatically manipulate the DOM, although you can if you want to.

• Marshaling data to and from the UI: CRUD operations make up the majority of AJAX applications' tasks. The flow of marshaling data from the server to an internal object to an HTML form, allowing users to modify the form, validating the form, displaying validation errors, returning to an internal model, and then back to the server, creates a lot of boilerplate code. Angular eliminates almost all of this boilerplate, leaving code that describes the overall flow of the application rather than all of the implementation details.

• Writing tons of initialization code just to get started: Typically you need to write a lot of plumbing just to get a basic "Hello World" AJAX app working. With Angular you can bootstrap your app easily using services, which are auto-injected into your application in a Guice-like dependency-injection style. This allows you to get started developing features quickly. As a bonus, you get full control over the initialization process in automated tests.

Source:- https://docs.angularjs.org/guide/introduction

Friday, 05 August 2016 04:32

What is Data Warehousing?


A data warehouse is a subject-oriented, integrated, time-variant and non-volatile collection of data in support of management's decision making process.

Data Warehouse Architecture

 Different data warehousing systems have different structures. Some may have an ODS (operational data store), while some may have multiple data marts. Some may have a small number of data sources, while some may have dozens of data sources. In view of this, it is far more reasonable to present the different layers of a data warehouse architecture rather than discussing the specifics of any one system.

In general, all data warehouse systems have the following layers:

  •  Data Source Layer
  •  Data Extraction Layer
  •  Staging Area
  •  ETL Layer
  •  Data Storage Layer
  •  Data Logic Layer
  •  Data Presentation Layer
  •  Metadata Layer
  •  System Operations Layer


Each component is discussed individually below: 

Data Source Layer

This represents the different data sources that feed data into the data warehouse. The data source can be of any format: a plain text file, a relational database, another type of database, an Excel file, and so on can all act as a data source.

 Many different types of data can be a data source:

  • Operations -- such as sales data, HR data, product data, inventory data, marketing data, systems data.
  • Web server logs with user browsing data.
  • Internal market research data.
  • Third-party data, such as census data, demographics data, or survey data.

All these data sources together form the Data Source Layer. 

Data Extraction Layer

Data gets pulled from the data source into the data warehouse system. There is likely some minimal data cleansing, but there is unlikely any major data transformation.

Staging Area

This is where data sits prior to being scrubbed and transformed into a data warehouse / data mart. Having one common area makes it easier for subsequent data processing / integration.

ETL Layer

This is where data gains its "intelligence", as logic is applied to transform the data from a transactional nature to an analytical nature. This layer is also where data cleansing happens. The ETL design phase is often the most time-consuming phase in a data warehousing project, and an ETL tool is often used in this layer.
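A toy example of that transactional-to-analytical transformation, written as a TypeScript sketch with illustrative field names (real ETL layers are usually built with dedicated tools, as noted above):

```typescript
// Toy ETL transform: turn raw transactional order rows into cleansed,
// aggregated fact rows per (day, product). Field names are illustrative.
interface RawOrder { orderId: string; productId: string; amount: string; orderedAt: string; }
interface SalesFact { dateKey: string; productId: string; totalAmount: number; orderCount: number; }

function transform(rows: RawOrder[]): SalesFact[] {
  const facts = new Map<string, SalesFact>();
  for (const row of rows) {
    const amount = Number(row.amount);
    if (!row.productId || Number.isNaN(amount)) continue;   // data cleansing: drop bad rows
    const dateKey = row.orderedAt.slice(0, 10);              // e.g. "2016-08-05"
    const key = `${dateKey}|${row.productId}`;
    const fact = facts.get(key) ?? { dateKey, productId: row.productId, totalAmount: 0, orderCount: 0 };
    fact.totalAmount += amount;   // aggregate to the analytical grain
    fact.orderCount += 1;
    facts.set(key, fact);
  }
  return [...facts.values()];
}
```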

Data Storage Layer

This is where the transformed and cleansed data sit. Based on scope and functionality, 3 types of entities can be found here: data warehouse, data mart, and operational data store (ODS). In any given system, you may have just one of the three, two of the three, or all three types.

Data Logic Layer

This is where business rules are stored. Business rules stored here do not affect the underlying data transformation rules, but do affect what the report looks like.

Data Presentation Layer

This refers to the information that reaches the users. It can take the form of a tabular or graphical report in a browser, an emailed report that is automatically generated and sent every day, or an alert that warns users of exceptions, among others. Usually an OLAP tool and/or a reporting tool is used in this layer.

Metadata Layer

This is where information about the data stored in the data warehouse system is stored. A logical data model would be an example of something that's in the metadata layer. A metadata tool is often used to manage metadata.

System Operations Layer

This layer includes information on how the data warehouse system operates, such as ETL job status, system performance, and user access history.

Source: http://www.1keydata.com/datawarehousing/data-warehouse-architecture.html

Wednesday, 03 August 2016 05:02

What's Hummingbird? What does it mean for SEO?


Hummingbird is the Google algorithm as a whole. It's composed of four phases:

  1. Crawling, which collects information on the web;
  2. Parsing, which identifies the type of information collected, sorts it, and forwards it to a suitable recipient;
  3. Indexing, which identifies and associates resources in relation to a word and/or a phrase;
  4. Search, which...
  • Understands the queries of the users;
  • Retrieves information related to the queries;
  • Filters and clusters the information retrieved;
  • Ranks the resources; and
  • Paints the search result page and so answers the queries.  

This last phase, Search, is where we can find the “200+ ranking factors” (RankBrain included) and filters like Panda or anti-spam algorithms like Penguin.

Remember that there are as many search phases as vertical indices exist (documents, images, news, video, apps, books, maps...).

We SEOs tend to fixate almost exclusively on the Search phase, forgetting that Hummingbird is more than that.

This approach to Google is myopic and does not withstand a very simple logical square exercise.

  1. If Google is able to correctly crawl a website (Crawling);
  2. to understand its meaning (Parsing and Indexing);
  3. and, finally, if the site itself responds positively to the many ranking factors (Search);
  4. then that website will be able to earn the organic visibility it aims to reach.

If even one of the three elements of the logical square is missing, organic visibility is missing; think about non-optimized AngularJS websites, and you’ll understand the logic.

What does it mean for SEO?

Not much, really. Unless you’re bending or breaking the rules, a white-hat approach means that your SEO shouldn’t be affected. Despite the tired old cries of “SEO is dead!” doing the usual rounds online, if you’re a publisher of quality content then Hummingbird will make no difference. For SEO professionals, it’s actually a good thing, as it helps to weed out the black hats who make outrageous (and unfounded) claims that they can get your site on page one of Google search results within a week.

For content publishers and writers, Hummingbird is also a good thing, so long as the content being produced is worthwhile. The algorithm is intended to help get rid of irrelevant content and spam and put more weight on industry thought leaders and influencers.

The authorship program goes hand-in-hand with this as it allows Google to find authors that produce high-quality content and rely on them to be ‘trusted’ authors.

Link building practices are also affected, as the algorithm seeks to find dodgy practices (such as poor quality guest blogging) by evaluating inbound links in a more complex manner.

Sources: http://positionly.com/blog/seo/google-hummingbird-update, https://moz.com/blog/wake-up-seos-the-new-new-google-is-here-2016

       

The performance and security of an existing network are two major aspects that can affect your business. Weak network performance and a lack of security can leave you in an annoying situation. If you want to improve the performance and security of your network without buying any external hardware, you should follow some advanced techniques and methods.

Here are the top five ways to make your network faster and to reconfigure your existing hardware to strengthen its security:

1. Disk Striping

Most of the time we assume the network is the major problem, but often it isn't; disk I/O is the real bottleneck. If you have plenty of hard drives, you can merge them into one logical drive with your data striped across them. There are some restrictions, but it will definitely increase performance.

2. Remove Network Protocols

If you are still using AppleTalk, TCP/IP, NetWare IPX, and NetBEUI protocols on your server, you should remove the ones you are not using. This will not only improve performance but also tighten the security of your existing network.

3. Implement WAN Bandwidth Saving Models

You can find various technologies that change the access model of a network and dramatically lower WAN utilization and latency. Some examples are content networking, Web Services, WAFS (wide area file services), and terminal services.

4. Balance your system Bus Load

As with disk striping, you don't want all of your I/O (your NIC, your tape drives, and your hard drives) on the same bus. Many servers have only a few busses, so think about how you can spread the load across them.

First of all, understand that data doesn't move directly from the hard drive to the NIC even when they are on the same bus. All the elements still have to communicate with your CPU, so if there is contention, it is quicker if they are on different busses.

5. Customize your TCP/IP Settings, Especially the Window Size

If you still can't find the problem on your WAN, you should customize your TCP/IP settings, especially the window size. Always keep the bandwidth-delay product in mind and check your window size against it.
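The bandwidth-delay product is simply bandwidth multiplied by round-trip time; the TCP window needs to be at least that large to keep the link full. A quick worked example with illustrative figures:

```typescript
// Bandwidth-delay product: the amount of data "in flight" that the TCP window
// must cover to keep a link busy. Example figures are illustrative.
const bandwidthBitsPerSec = 100_000_000;  // 100 Mbit/s WAN link
const roundTripSec = 0.05;                // 50 ms round-trip time

const bdpBytes = (bandwidthBitsPerSec * roundTripSec) / 8; // 625,000 bytes
console.log(`${(bdpBytes / 1024).toFixed(1)} kB`);         // ≈ 610 kB minimum window size
```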

Wrapping up

These are the five most useful ways to improve the performance and security of your network. Don't hesitate to follow these techniques. You can also share your own techniques by leaving a comment below.

Source: Free Articles from ArticlesFactory.com  

