
Experiencing fluctuations in organic traffic and rankings? Columnist Stephanie LeVonne has a list of factors to check and advice on how to address them.

Whether you’re a seasoned SEO or someone who runs your own business, you know there are fluctuations in your organic traffic, but you may struggle to pinpoint the root cause.

Organic search, unlike its paid counterpart, comes with a unique set of challenges in diagnosing a decline in traffic and conversions. There are some obvious places you can mine for insights (Google Analytics, Google Search Console), but other factors at play can be harder to quantify.

From basic issues to advanced ones to factors that are largely out of your control, the following is a list of things to check when diagnosing major fluctuations in organic traffic or search engine rankings. By examining both internal and external factors, you can start to piece together the puzzle.

Basic issues

1. Your pages aren’t indexed

Conduct a quick Google search using “site:yourwebsite.com” to make sure your pages are actually indexed. If you’re noticing that critical pages aren’t appearing in the SERPs, you’ve likely found the culprit. Check your robots.txt file to make sure you haven’t blocked important pages or directories. If that looks good, check individual pages for a noindex tag.
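If you'd rather script this check than click through pages by hand, here is a minimal sketch in Python. It assumes the `requests` library is installed, and the domain and page paths are hypothetical placeholders:

```python
import urllib.robotparser

import requests

SITE = "https://www.example.com"       # hypothetical domain
PAGES = ["/", "/products/", "/blog/"]  # hypothetical critical pages

# 1. Does robots.txt block Googlebot from any critical page?
rp = urllib.robotparser.RobotFileParser(SITE + "/robots.txt")
rp.read()
for path in PAGES:
    if not rp.can_fetch("Googlebot", SITE + path):
        print(f"robots.txt blocks Googlebot from {path}")

# 2. Does the page carry a noindex directive in its headers or body?
for path in PAGES:
    resp = requests.get(SITE + path, timeout=10)
    header = resp.headers.get("X-Robots-Tag", "")
    # Crude substring check; a real audit would parse the robots meta tag.
    if "noindex" in header.lower() or "noindex" in resp.text.lower():
        print(f"{path} appears to carry a noindex directive")
```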

2. Bot filters

Are you currently excluding all known bots and spiders in Google Analytics? If not, you may be experiencing inflated traffic metrics and not even know it. Typically, bots enter through the home page and cascade down throughout your site navigation, mimicking real user behavior. One telltale sign of bot traffic is a highly trafficked page with a high bounce rate, low conversions and a low average time on page.

While it’s best to create a custom dimension for filtering out bots, applying the generic bot filter is a good place to start. It’s important to note that filters cannot be applied retroactively, so if you’ve recently turned on this feature, expect your reported traffic to drop from that point forward. Additionally, double-check that you are filtering out your own traffic and IP address.
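To see whether bot traffic might be inflating your numbers, you can export a landing-page report from your analytics tool and look for that telltale combination programmatically. A rough sketch using pandas; the file name and column names are assumptions about your export, not a real Google Analytics schema:

```python
import pandas as pd

# Hypothetical landing-page export; column names are assumptions.
df = pd.read_csv("landing_pages.csv")

suspicious = df[
    (df["sessions"] > df["sessions"].quantile(0.90))  # highly trafficked
    & (df["bounce_rate"] > 0.90)                      # nearly everyone bounces
    & (df["avg_time_on_page"] < 2)                    # seconds; instant exits
    & (df["conversions"] == 0)                        # nothing converts
]
print(suspicious[["page", "sessions", "bounce_rate"]])
```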

3. Recent site updates

If you’ve recently modified your on-page copy, undergone a site overhaul (removing pages, reordering the navigation) or migrated your site sans redirects, it’s reasonable to expect a decline in traffic. After reworking your site content, Google must re-crawl and then re-index these pages. It’s not uncommon to experience unstable rankings for up to a few weeks afterwards. If you’ve changed your URL structure or removed pages from your site, it’s important to have a 301-redirect strategy in place to preserve link equity and avoid a loss of rankings/traffic.
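If you keep your redirect map in a spreadsheet, a short script can verify that every old URL actually returns a 301 to the intended destination. A minimal sketch, assuming a hypothetical two-column `redirect_map.csv` (old URL, new URL) and the `requests` library:

```python
import csv

import requests

# Hypothetical file with rows of: old_url,new_url
with open("redirect_map.csv") as f:
    for old_url, new_url in csv.reader(f):
        resp = requests.get(old_url, allow_redirects=False, timeout=10)
        status = resp.status_code
        location = resp.headers.get("Location", "")
        if status != 301 or location.rstrip("/") != new_url.rstrip("/"):
            print(f"{old_url}: expected 301 -> {new_url}, "
                  f"got {status} -> {location or '(none)'}")
```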

4. URL confusion

Do you have a content strategy in place, or are your efforts more “off the cuff?” Not having a clearly defined keyword map can spell trouble — especially if two or more pages are optimized for the same keyword. In practice, this will cause pages to compete against each other in the SERPs, potentially reducing the rankings of these pages. Here is an example of what this might look like:

• URL 1: www.mysite.com/cakes/raspberry-chocolate-cake

• URL 2: www.mysite.com/flavors/chocolate-raspberry-cake

Fortunately, if you have access to a keyword tracking tool, you should be able to see a day-by-day breakdown of which URLs Google chooses to rank for that particular keyword. With a little time and effort, you should be able to remedy the situation.
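If your tracking tool can export its history, you can surface cannibalized keywords automatically. A sketch under assumed column names (`keyword`, `date`, `ranking_url`), since every tool's export format differs:

```python
import pandas as pd

# Hypothetical rank-tracker export: keyword, date, ranking_url
df = pd.read_csv("rankings.csv")

# Keywords for which Google has ranked more than one URL over the period.
url_counts = df.groupby("keyword")["ranking_url"].nunique()
for keyword in url_counts[url_counts > 1].index:
    urls = df.loc[df["keyword"] == keyword, "ranking_url"].unique()
    print(f"{keyword!r} alternates between: {', '.join(urls)}")
```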

Advanced issues

5. Structured data markup

Implementing structured data markup (such as that from schema.org) might seem like a one-time project, but that “set it and forget it” mentality can land you in hot water. You should be monitoring the appearance of your rich snippets on a regular basis to ensure they are pulling in the correct information. Changing the content on your website can alter the markup without warning.

Likewise, depending on your back-end merchandising setup, products could be triggered to show “out of stock” schema if one color variation goes out of stock. As you can imagine, this can wreak havoc on your click-through rates and lead users to purchase from your resellers — or worse, your competitors!
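A simple monitor can catch this kind of regression before it dents your click-through rates. The sketch below pulls JSON-LD blocks out of a product page and flags any that advertise `OutOfStock` availability; the product URL is hypothetical, and a production monitor would use a proper HTML parser rather than a regex:

```python
import json
import re

import requests

JSONLD = re.compile(
    r'<script[^>]*type="application/ld\+json"[^>]*>(.*?)</script>', re.S
)

for url in ["https://www.example.com/cakes/raspberry-chocolate-cake"]:
    html = requests.get(url, timeout=10).text
    for block in JSONLD.findall(html):
        try:
            data = json.loads(block)
        except ValueError:
            continue  # skip malformed markup
        if not isinstance(data, dict):
            continue
        offers = data.get("offers", {})
        if isinstance(offers, dict) and \
                "OutOfStock" in str(offers.get("availability", "")):
            print(f"{url} is advertising OutOfStock in its markup")
```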

6. Promotional cadence & the “Sale Hangover Effect”

Did you run a big promotion last year, such as a sample or flash sale? Did it coincide with the same week this year? If not, your year-on-year comparison will be skewed.

If so, were your past promotions equally enticing? Did your brand launch a new product line or offer limited-time products? These factors alone are difficult to measure, and we’re not even accounting for PR efforts, which will also impact your organic metrics.

There is also significant evidence to suggest that the “Sale Hangover Effect” is a real phenomenon, not just anecdote. It deals with two factors: share of mind and share of wallet.

Wednesday, 20 July 2016 05:15

The Periodic Table Of SEO Success Factors


  Search engine optimization — SEO — may seem like alchemy to the uninitiated. But there is a science to it. Search engines reward pages with the right combination of ranking factors or “signals.” SEO is about ensuring your content generates the right type of signals.

The chart above summarizes the major factors to focus on for search engine ranking success.

The olden days are a little older than you might think…

From the simplest to the most sophisticated, all computer programs rely on very simple instructions to perform basic functions: comparing two values, adding two numbers, moving items from one place to another. In modern systems, such instructions are generated by a compiler from a program in a high-level language, but early machines were so limited in memory and processing power that every instruction had to be spelled out completely, and mathematicians took up pencil and paper to manually work out formulas for configuring the machines – even before there were machines to configure.

“If you really want to look at the olden days, you want to start with Charles Babbage,” says Armando Solar-Lezama, assistant professor in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). Babbage designed an analytical engine – a mechanical contraption outfitted with gears and levers – that could be programmed to perform complicated computations. His collaborator, Ada Lovelace (daughter of poet Lord Byron), recognized the potential of the machine, too, and in 1842 wrote what’s considered to be the first computer program. Her lengthy algorithm was created specifically for computing Bernoulli numbers on Babbage’s machine – had it ever actually been built.

By the mid-20th century, though, working computers existed, consisting of plug boards and cables connecting modules of the machine to one another. “They had giant switchboards for entering tables of values,” says Solar-Lezama. “Each row had a switch with 10 positions, one for each digit. The operator flipped the switches and reconfigured the plugs in order to set the values in the table.”

Before long, programmers realized it was possible to wire the machine in such a way that each row of switches would be interpreted as an instruction in a program. The machine could be reprogrammed by flipping switches rather than having to rewire it every time – not that writing such a program was easy. Even in later machines that used punched tapes or cards in place of switchboards, instructions had to be spelled out in detail. “If you wanted a program to multiply 5 + 7 by 3 + 2,” says Solar-Lezama, “you had to write a long sequence of instructions to compute 5+7 and put that result in one place. Then you’d write another instruction to compute 3+2, put that result in another place, and then write the instruction to compute the product of those two results.”

That painstaking process became a thing of the past in the late 1950s with Fortran, the first automated programming language. “Fortran allowed you to use actual formulas that anyone could understand,” says Solar-Lezama. Instead of a long series of instructions, programmers could simply use recognizable equations and linguistic names for memory addresses. “Instead of telling the computer to take the value in memory address 02739, you could tell it to use the value X,” he explains.
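Mimicked in a modern language, the contrast Solar-Lezama describes looks something like this (Python here purely for illustration):

```python
# Early machine style: every intermediate result gets its own
# explicit instruction and its own storage location.
slot_a = 5 + 7            # compute 5+7, put the result in one place
slot_b = 3 + 2            # compute 3+2, put the result in another place
result = slot_a * slot_b  # combine the two stored results
print(result)             # 60

# Fortran-era formula style: one recognizable equation.
print((5 + 7) * (3 + 2))
```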

Today’s programming software can take programs written at a very high level and compile them into sequences of billions of instructions that a computer can understand. But programmers are still faced with the task of specifying their computation at the correct level of detail, precision, and correctness. “Essentially, programming has always been about figuring out the right strategy for a machine to perform the computation that you want,” says Solar-Lezama.

source:- http://engineering.mit.edu/ask/how-did-people-olden-days-create-software-without-any-programming-software

Below is a visual history of "search" and search engines; hopefully it's both a trip down memory lane and a useful resource for anyone looking to learn a bit more about the history of Internet search engines. 

WordStream's search engine history timeline is shown below.

 The History of Search Engines

Modern search engines are pretty incredible – complex algorithms enable search engines to take your search query and return results that are usually quite accurate, presenting you with valuable nuggets of information amidst a vast mine of data.

Search engines have come a long way since their early prototypes, as our Internet Search Engines History infographic illustrates. From improvements in web crawlers and the categorizing and indexing of the web, to the introduction of new protocols such as robots.txt giving webmasters control over which web pages get crawled, the development of search engines has been the culmination of multiple search technologies pioneered by different engines. Alta Vista was the first search engine to process natural language queries; Lycos started strong with a system categorizing relevance signals, matching keywords with prefixes and word proximity; and Ask Jeeves introduced the use of human editors to match actual user search queries.

How Do Search Engines Work?

First of all, let's ask: what is a search engine? A search engine is a program that searches the web for sites based on your keyword search terms. The search engine takes your keyword and returns search engine results pages (SERPs), with a list of sites it deems relevant or connected to your searched keyword.

The goal for many sites is to appear in the first SERP for the most popular keywords related to their business. A site's keyword ranking is very important because the higher a site ranks in the SERP, the more people will see it.

SEO, or search engine optimization, is the method used to increase the likelihood of obtaining a first page ranking through techniques such as link building, SEO title tags, content optimization, meta description, and keyword research.

Google and other major search engines like Bing and Yahoo use vast numbers of computers in order to search through the enormous quantities of data across the web.

Web search engines catalog the world wide web by using a spider, or web crawler. These web-crawling robots were created for indexing content; they scan and assess the content on site pages and information archives across the web.
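At its core, a crawler is just a queue of URLs: fetch a page, record its content for indexing, extract its links, repeat. Here is a deliberately tiny sketch of that loop (assuming the `requests` library; a real spider would respect robots.txt, throttle itself, and parse HTML properly):

```python
import re
from collections import deque
from urllib.parse import urljoin

import requests

def crawl(seed, limit=20):
    """Breadth-first crawl building a tiny URL -> raw HTML 'index'."""
    index, queue, seen = {}, deque([seed]), {seed}
    while queue and len(index) < limit:
        url = queue.popleft()
        try:
            html = requests.get(url, timeout=5).text
        except requests.RequestException:
            continue
        index[url] = html  # a real engine tokenizes and scores this
        for href in re.findall(r'href="([^"#]+)"', html):
            link = urljoin(url, href)
            if link.startswith("http") and link not in seen:
                seen.add(link)
                queue.append(link)
    return index

# Hypothetical seed URL:
# pages = crawl("https://www.example.com")
```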

Algorithms and Determining the Best Search Engines

Different internet search engines use different algorithms for determining which web pages are the most relevant for a particular search engine keyword, and which web pages should appear at the top of the search engine results page.

Relevancy is the key for online search engines – users naturally prefer a search engine that will give them the best and most relevant results.

Search engines are often quite guarded with their search algorithms, since their unique algorithm is trying to generate the most relevant results. The best search engines, and often the most popular search engines as a result, are the ones that are the most relevant.
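Real ranking algorithms are secret and combine hundreds of signals, but one classic, public relevance signal is TF-IDF: a page scores higher for a query term it uses often, weighted by how rare that term is across the index. A toy illustration:

```python
import math
from collections import Counter

docs = {
    "page1": "chocolate raspberry cake recipe",
    "page2": "car insurance quotes online",
    "page3": "raspberry cake with dark chocolate frosting",
}

def rank(query, docs):
    words = {doc: text.split() for doc, text in docs.items()}
    scores = Counter()
    for term in query.split():
        doc_freq = sum(term in w for w in words.values())
        if doc_freq == 0:
            continue  # no document mentions this term
        idf = math.log(len(docs) / doc_freq)  # rarer terms weigh more
        for doc, w in words.items():
            scores[doc] += (w.count(term) / len(w)) * idf  # term frequency
    return scores.most_common()

print(rank("chocolate cake", docs))  # page1 and page3 outrank page2
```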

Search Engine History

Search engine history started in 1990 with Archie, a tool that indexed the directory listings of public FTP sites into a searchable database. Search engines continued to be primitive directory listings until they developed to crawl and index websites, eventually creating algorithms to optimize relevancy.

Yahoo started off as just a list of favorite websites, eventually growing large enough to become a searchable index directory. They actually had their search services outsourced until 2002, when they started to really work on their search engine.

History of Google Search Engine

Google's unique and improving algorithm has made it one of the most popular search engines of all time. Other search engines continue to have a difficult time matching the relevancy algorithm Google has created by examining a number of factors such as social media, inbound links, fresh content, etc.

As evidenced by the above infographic, Google appeared on the search engine scene in 1996. Google was unique because it ranked pages according to citation notation, in which a mention of one site on a different website became a vote in that site's favor. This was something other search engines at the time were not doing.

Google also began judging sites by authority. A website's authority, or trustworthiness, was determined by how many other websites were linking to it, and how reliable those outside linking sites were.
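The "links as votes" idea can be captured in a few lines. Below is a minimal power-iteration PageRank over a toy three-page link graph; it illustrates the published concept, not Google's actual implementation:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Toy PageRank: links maps each page to the pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        rank = {
            p: (1 - damping) / len(pages)
            + damping * sum(rank[q] / len(links[q])
                            for q in pages if p in links[q])
            for p in pages
        }
    return rank

# "a" is cited by both "b" and "c", so it accumulates the most authority.
print(pagerank({"a": ["b"], "b": ["a"], "c": ["a"]}))
```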

Google search history can be witnessed by taking a look at Google's homepage progressions over the years. It's remarkable to see how basic and primitive the now most popular search engine once was.

Google Search Engine History: Looking Into the Past

A picture of the original 1997 Google search engine homepage, back when Google was part of stanford.edu. 

 

Google search engine homepage in 2005

The modern, minimalist Google of 2011.

source:-http://www.wordstream.com/articles/internet-search-engines-history

 

Programmers have always known that new programming languages need to be learned to keep their skills marketable in the workplace. That trend is not only continuing – it seems to be increasing due to the rate of change taking place in the technology sector.

Programming languages like C, C++, Java, HTML, Python, or PHP have always had answers to the demands of the market. However, progression in the innovation sector requires people to gain even more skills and knowledge to bring ideas to life.

Even though programming languages like Java, HTML, and Objective-C remain the backbone of IT development, some new and interesting programming languages have gained impressive reviews and high ratings among tech gurus across the world. Below is a list of new programming languages to learn and keep watch of in 2016.

1. Google Go

Google’s Go Programming Language was created in 2009 by three Google employees: Robert Griesemer, Rob Pike, and Ken Thompson. The language’s success can be seen clearly in the fact that the BBC, SoundCloud, Facebook and the UK Government’s official website are among Go’s notable users. It is faster and easier to learn, and it does the same job that C++ or Java has been doing for us. As the creators said, “Go is an attempt to combine the ease of programming of an interpreted, dynamically typed language with the efficiency and safety of a statically typed, compiled language.”

2. Swift

When a programming language is launched at Apple’s WWDC, you can be sure that it has something that can deliver success and results. Swift was released at Apple’s WWDC in 2014, and its exponential growth in just one year shows how capable and promising this language is. According to Apple, Swift brings the best of Python and Ruby together and adds modern programming fundamentals to make it more effective and fun. If you’ve been using or were planning on learning Objective-C to develop iOS apps, don’t bother: Swift is the language you need to know moving forward. There will soon come a day when nobody uses Objective-C to develop apps.

3. Hack

Just like Swift, Hack is another recently launched programming language, this one a product of another tech giant, Facebook. In the past year, Facebook has transformed almost its entire PHP codebase to Hack, and if a website with millions of users and unparalleled traffic can rely on Hack, then the programming language must surely be here to stay.

4. Rust

The Rust Programming Language, developed by Mozilla, reached its 1.0 release in 2015. It did not receive the immediate success that Hack and Go did, but in the last six months the number of Rust users in the world has escalated, and it is expected to climb much higher. An upgrade to C and C++, Rust is becoming more beloved by programmers every day.

5. Julia

Delivering Hadoop-style parallelism, Julia’s stock in the tech industry is rising. The Julia Language is highlighted as one that is destined to make a major impact in the future. Described as a high-level, high-performance, dynamic programming language for technical computing, Julia is making a niche of its own in the world of programming languages.

6. Scala

The Scala Programming Language has been on the market for a little longer than most of the other languages in this list and was probably a little slow to get off the blocks compared to the others. However, this functional and highly scalable programming language has gradually attracted attention, and companies such as Twitter, LinkedIn and Intel are now using it in their systems.

7. Dart

Given that Google Go has garnered such unprecedented success, the other language from Google – Google Dart – has been in its shadow for the past 7-8 months. However, now that app development is gaining pace, people are realising how useful Dart can be in implementing high-performance architecture and performing modern app development. Unveiled as a substitute for JavaScript for browser apps, Dart is finally realising its true potential and is expected to continue its rise in the coming years.

source:-http://www.codingdojo.com/blog/new-programming-languages-to-learn-2016/

In a modern, multicore chip, every core—or processor—has its own small memory cache, where it stores frequently used data. But the chip also has a larger, shared cache, which all the cores can access.

If one core tries to update data in the shared cache, other cores working on the same data need to know. So the shared cache keeps a directory of which cores have copies of which data.

That directory takes up a significant chunk of memory: In a 64-core chip, it might be 12 percent of the shared cache. And that percentage will only increase with the core count. Envisioned chips with 128, 256, or even 1,000 cores will need a more efficient way of maintaining cache coherence.

At the International Conference on Parallel Architectures and Compilation Techniques in October, MIT researchers unveil the first fundamentally new approach to cache coherence in more than three decades. Whereas with existing techniques, the directory's memory allotment increases in direct proportion to the number of cores, with the new approach, it increases according to the logarithm of the number of cores.

In a 128-core chip, that means that the new technique would require only one-third as much memory as its predecessor. With Intel set to release a 72-core high-performance chip in the near future, that's a more than hypothetical advantage. But with a 256-core chip, the space savings rises to 80 percent, and with a 1,000-core chip, 96 percent.
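The scale of those savings follows directly from linear-versus-logarithmic growth. A back-of-the-envelope comparison, assuming a conventional full-map directory keeps one presence bit per core while a log-scaling scheme keeps a small fixed number of log2(N)-bit fields (the constant of eight here is an illustrative assumption, not the paper's actual design):

```python
import math

FIELDS = 8  # assumed fixed number of log2(N)-bit fields per entry

for cores in (64, 128, 256, 1024):
    full_map = cores  # bits: one presence bit per core
    log_scheme = FIELDS * math.ceil(math.log2(cores))
    saving = 1 - log_scheme / full_map
    print(f"{cores:5d} cores: {full_map:5d} bits vs {log_scheme:3d} bits "
          f"({saving:.0%} smaller)")
```

The exact constants differ from the published design, but the trend matches the figures above: the bigger the chip, the bigger the win.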

When multiple cores are simply reading data stored at the same location, there's no problem. Conflicts arise only when one of the cores needs to update the shared data. With a directory system, the chip looks up which cores are working on that data and sends them messages invalidating their locally stored copies of it.

"Directories guarantee that when a write happens, no stale copies of the data exist," says Xiangyao Yu, an MIT graduate student in electrical engineering and computer science and first author on the new paper. "After this write happens, no read to the previous version should happen. So this write is ordered after all the previous reads in physical-time order."

Wednesday, 13 July 2016 04:54

Importance of Software Engineering in modern era


Software engineering is the study and application of engineering to the design, development, and maintenance of software. A typical formal definition is “the application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software.” Software engineering is a relatively new area of engineering, but its scope is extremely broad, and it is among the fastest-growing fields in the world today.

It must be noted that the term software development can be used for every type of software development, whether it’s as simple as Visual Basic for Applications modules for Microsoft Word, Excel or Access, or as involved as developing large, expensive and complicated business applications or creating software for gaming entertainment.

Software engineers are computer programming professionals. It’s worth mentioning that a software engineer is also a programmer, as he writes code, but a programmer may not be called a software engineer, because the former requires a formal education. Furthermore, a software engineer follows a systematic process of understanding requirements and working with teams and other professionals to create the application software, components or modules that fulfill the specific needs of users; a computer programmer, by contrast, can work independently, as he understands algorithms and knows how to write code following the specifications given by software engineers. Software engineering is nonetheless a vast field. It is not limited to computer programming but covers a wide range of professions, from business to graphic design to video game development.

Specific software is needed not just in one field of work but in every field. Since software is developed and embedded in machines so that it can meet the intents and purposes of users across professions, software engineering is of great application and assistance. The field involves not only using common computer languages such as C, C++, Java, Python and Visual Basic in a manner that attains the intended results, but also applying concepts so that software can be developed effectively and efficiently.

Software engineers or developers are the creative minds behind computer programs. Some develop application software for clients and companies by analyzing the needs of users. Others develop the system software used to run devices and control networks. Whatever the nature of the work, software engineering is one of the highest-paid fields in this modern day and age. It is an up-and-coming field, believed likely to grow much faster than the average profession. If you have strong problem-solving skills, an eye for detail and a good understanding of mathematical functions, then you may consider this lucrative field of study, which could give you various benefits, including a high level of job satisfaction recompensing your creative efforts.

source:-http://fareedsiddiqui.com/

Mice, and now touchscreens, have become a daily part of our lives in the way we interact with computers. But what about people who lack the ability to use a mouse or touchscreen? Or situations where these would be impractical or outright dangerous?

Many researchers have explored eye-gaze tracking as a potential control mechanism. These tracking mechanisms have become sophisticated and small enough that they currently feature in devices such as smartphones and tablets. But on their own, these mechanisms may not offer the precision and speed needed to perform complex computing tasks.

Now, a team of researchers at the Department of Engineering has developed a computer control interface that uses a combination of eye-gaze tracking and other inputs. The team's research was published in a paper, 'Multimodal Intelligent Eye-Gaze Tracking System', in the International Journal of Human-Computer Interaction.

Dr Pradipta Biswas, Senior Research Associate in the Department's Engineering Design Centre, and the other researchers provided two major enhancements to a standalone gaze-tracking system. First, sophisticated software interprets factors such as velocity, acceleration and bearing to provide a prediction of the user's intended target. Next, a second mode of input is employed, such as a joystick.
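To give a flavor of that prediction step, here is a simplified sketch that scores on-screen targets by how well the gaze pointer's bearing lines up with them and by their distance; the scoring function is an illustrative assumption, not the published model:

```python
import math

def predict_target(pos, velocity, targets):
    """Guess which target the gaze pointer is heading toward.

    pos and velocity are (x, y) tuples for the pointer; targets is a
    list of (x, y) centers. Scoring weights are illustrative only.
    """
    speed = math.hypot(*velocity)
    best, best_score = None, float("-inf")
    for tx, ty in targets:
        dx, dy = tx - pos[0], ty - pos[1]
        dist = math.hypot(dx, dy) or 1e-9
        # Cosine of the angle between movement direction and the
        # direction to the target: 1.0 means heading straight at it.
        align = (velocity[0] * dx + velocity[1] * dy) / ((speed * dist) or 1e-9)
        score = align - 0.001 * dist  # favor aligned, nearby targets
        if score > best_score:
            best, best_score = (tx, ty), score
    return best

# Pointer at (100, 100) moving right: the right-hand target wins.
print(predict_target((100, 100), (5, 0), [(300, 110), (100, 300)]))
```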

"We hope that our eye-gaze tracking system can be used as an assistive technology for people with severe mobility impairment," Pradipta said. "We are also exploring the potential applications in military aviation and automotive environments where operators' hands are engaged with controlling an aircraft or vehicle."

The selection problem

One challenge that arises when designing such a system: once the pointer reaches the target, how does the user confirm the selection? On a typical personal computer, this is accomplished with a click of the mouse; with a phone or tablet, a tap on the screen.

Basic eye-gaze tracking systems often use a signal such as blinking the eyes to indicate this choice. However, blinking is often not ideal. For example, in combat situations, pilots' eyes might dry up, precluding their ability to blink at the right time.

Pradipta's team experimented with several ways to solve the selection problem, including manipulating joystick axes, enlarging predicted targets, and using a spoken keyword such as 'fire' to indicate a target.

Unsurprisingly, they found that a mouse remains the fastest and least cognitively stressful method of selecting a target – possibly assisted by the fact that most computer users are already comfortable with this technique. But a multimodal approach combining eye-gaze tracking, predictive modelling and a joystick can almost match a mouse in terms of accuracy and cognitive load. Further, for computer novices given sufficient training in the system, the intelligent multimodal approach can even be faster.

The hope is that these revelations will lead to systems that perform as well as – or better than – a mouse. "I am very excited for the prospects of this research," Pradipta said. "When clicking a mouse isn't possible for everyone, we need something else that's just as good."

source:-http://phys.org/news/2015-04-techniques-eye-gaze-tracking-interaction.html#nRlv

 

Displays that can be folded and rolled up have been shown in prototype smartphones, wearables, and other devices -- but when will such products be available?

Advances in technology suggest they aren't too far off in the future. Such devices could start showing up as early as next year or 2018, said Jerry Kang, senior principal analyst for emerging display technologies and OLED at IHS.

Manufacturers are trying to launch them in devices like tablets that can fold into a smartphone-size device. It's possible to use these displays in wearable devices, but reliability, weight and battery life need to be considered, Kang said.

Small folding screens will likely come before larger ones, mainly due to the economics of making such displays, Kang said.

The displays will be based on OLED (organic light-emitting diode) technology, which is considered a successor to current LED-backlit LCD technology. OLEDs don't need backlight panels, making them thinner and more power efficient.

 At CES this year, LG showed a stunningly thin paper-like display that could roll up. The company projects it will deliver foldable OLEDs by next year.

There are advantages to screens that can be folded or rolled up. They could lead to innovative product designs and increase the mobility of devices, Kang said.

For example, it could be easier to fit screens around the contours of a battery and other components. It will also provide a level of flexibility in how a user can change the shape of a device. But challenges remain in making such screens practical, Kang said.

 The size of batteries and circuits are of lesser concern in designing bendable screens, Kang said. The screens can be folded around components. Displays that can fold and roll are an extension of flexible displays, which are already in wearables, smartphones and TVs. For example, some TVs have flexible screens that are designed so that they can be slightly curved.

Samsung and LG started using flexible AMOLED displays in smartphones in 2013 and are adapting those screens for wearables. Those companies are also leading the charge to bring displays that can bend and fold to devices. 

The sorts of flexible displays used in curved products are still in their infancy, but IHS projects that such screens will continue siphoning market share from non-flexible displays. In 2022, 433.3 million flexible displays will ship, compared to 3.6 billion units of non-flexible displays.

source:-http://www.infoworld.com/


Sometimes Windows needs a fresh start—maybe a program’s gone awry or a file’s been corrupted. Luckily, Windows 10 lets you do this with a few clicks.

Windows 10 has an option where you can reinstall Windows and wipe your programs, but it keeps your files intact. Note that this won’t get rid of any “bonus” bloatware programs your PC vendor put on your computer before you bought it—you’ll have to do that manually—but it will get rid of any software you or someone else installed afterward.

Even though Windows says it’ll keep your files intact, it always pays to back up your PC or at least the important files before you do anything like this.

Ready? Okay. Hit the Start button and go to Settings. In Settings, select Update and Security, and in there, select Recovery.

At the top of the Recovery section you’ll see Reset this PC. Click the Get Started button—don’t worry, you’ve still got one more step—and then you get to choose an option. In this case, we’re choosing Keep my files, and the dialog box reminds you that this will remove your apps and settings. Then you just sit back and let Windows do its thing. It may take a while. When it’s done, you should have a fresh Windows installation, and unless you’re very unlucky, your personal files will still be right where you left them.

source:-http://www.computerworld.in/
