BREAKING

Friday, August 24, 2018

On the Dismissal from the Service of Two DOTr-CAR Officials


Wazzup Pilipinas!

The Department of Transportation (DOTr) yesterday served the order dismissing from the service Atty. Jesus Eduardo Natividad, Regional Director of the Cordillera Administrative Region (DOTr-CAR), and Datu Mohammad Abbas, Assistant Regional Director of the DOTr-CAR, after the two were found guilty of administrative offenses and corruption charges.

Unfortunately, Mr. Abbas has refused to obey the order, saying only President Duterte can terminate him because he is a presidential appointee.

For the record, Mr. Abbas does not hold any appointment paper from President Duterte. He was appointed by the previous administration and is serving the DOTr-CAR in a hold-over and acting capacity. A replacement has already been appointed.

Mr. Abbas was dismissed after several stakeholders in Baguio City and the Cordillera region accused him of misconduct. The charges include accepting money to facilitate the issuance of Certificates of Public Convenience (CPC), extortion in exchange for the release of impounded vehicles, accepting commissions from medical and notarial fees related to franchise applications, and Conduct Prejudicial to the Best Interest of the Service for brandishing a gun at a clinic employee.

After painstaking scrutiny and careful evaluation of the statements and affidavits, the DOTr found Mr. Abbas guilty of the administrative offenses of Grave Misconduct and Conduct Prejudicial to the Best Interest of the Service, with the penalty of dismissal from the service.

Meanwhile, Atty. Natividad was found guilty of Gross Neglect of Duty after he issued a CPC for a U-Hop application within the moratorium period.

The dismissal order also carries the accessory penalties of cancellation of eligibility, forfeiture of retirement benefits, and perpetual disqualification from re-employment in the government service.

Why Your SEO Strategy Must Recognize The Impact Of Machine Learning On Google


Wazzup Pilipinas!

With Google pouring billions of dollars into machine learning and artificial intelligence, it is evident that the SEO landscape is set to undergo a radical transformation very soon. Some areas that will see major upheavals:

Content
Even though content has already assumed great importance in SEO, AI is expected to become its driving force soon. Other on-page activities like isolated keyword tweaks, technical fixes, and backlink building are expected to take a backseat, while elements that only partially affect SEO today, like content structure, design, graphics, and linking, will assume greater importance once AI can assess the quality of a webpage instantaneously. SEO teams will need to verify that content is relevant and of high quality, that it satisfies the user's intent, and that it provides a positive experience. Ultimately, the content should drive users toward conversion. With pages being machine-processed, any failure on these counts will tank rankings immediately.



On-page Optimization and Backlinks
Title tags, URLs, meta descriptions, alt texts, and the like will remain important, but they will no longer drive SEO the way they do today.


AI algorithms will easily identify strategies that fail to add value to user experience, design, and content, forcing SEO practitioners to monitor and enhance their efforts. AI will put an end to black hat SEO techniques and also diminish the importance of the organic links that drive contemporary SEO. With Google's AI able to understand page contents, assess their value, and analyze the relevance and value of links, search rankings will no longer be manipulated, according to experts.

Technical SEO
Technical knowledge and skills will become far less important in SEO, with Google able to diagnose and auto-correct technical errors on your website. Today, voice platforms like Alexa, Google Home, and Amazon Echo are being pushed in part because the companies behind them want access to sites for analysis and ranking; without that initial data, all the AI prowess is useless.



User Experience
Unlike many technical aspects of SEO, UX quality will continue to grow in importance and remain a key ranking factor. Today, if you want to know whether users are engaged by a specific webpage, the best indicators are user engagement metrics and the click-through rate. Over time, however, Google is expected to refine its AI to the point where it can gauge user satisfaction much like a human would and rank the site accordingly. Everyone involved with the website, not only the SEO team but also digital marketers, designers, and developers, will need to get their act together to ensure there are no flat spots in the UX.
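
To make the click-through-rate indicator concrete, here is a minimal Python sketch of how CTR is conventionally computed; the sample numbers are hypothetical, purely for illustration:

def click_through_rate(clicks, impressions):
    """Return CTR as a fraction; 0.0 when there are no impressions."""
    if impressions == 0:
        return 0.0
    return clicks / impressions

# Example: 42 clicks out of 1,000 impressions is a 4.2% CTR.
print(f"CTR: {click_through_rate(42, 1000):.1%}")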

Conclusion
The AI implementation will make Google even smarter, lending it the power to assess the true value and utility of websites much like a human being, but without human mistakes and biases. The technology will permit instantaneous assessment of whether the user's intent has been met, with page rank allocated accordingly.

Remote DBA Expert Understands The Importance Of Choosing A Database For Analytics


Wazzup Pilipinas!

Whenever your analytics questions run up against the limits of off-the-shelf tools, it is time to move to a dedicated database for analytics. It is not a clever idea to write scripts that query your production database directly: chances are high that vital information will be deleted accidentally with engineers or analysts poking around in there. At that point, you need a separate database for analysis. Which is the right one to choose?

Having worked with lots of customers to get their databases up, we find the most useful criteria are the amount of data you possess, the type of data to analyze, the focus of your engineering team, and how fast you need the data.



Types of data for analyzing:
Consider the data you are planning to analyze. Would it fit well into columns and rows, or would it sit more naturally in a Word document? If it would live happily in a spreadsheet, relational databases like MySQL, Postgres, Amazon Redshift, or even BigQuery are the ones to consider.

These are structured, relational databases, which are amazing when you know what data you will receive and how it links together; they are built around the ways rows and columns relate.

For most user analysis, a relational database will be the perfect one for you. User traits like emails, names, and billing plans fit nicely into a table, as do user events and their significant properties.
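
As a rough illustration of how such traits and events sit in a relational schema, here is a small self-contained Python sketch; sqlite3 merely stands in for Postgres or MySQL, and the table and column names are illustrative, not prescribed by any particular product:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (
        id           INTEGER PRIMARY KEY,
        email        TEXT NOT NULL,
        name         TEXT,
        billing_plan TEXT
    );
    CREATE TABLE events (
        id         INTEGER PRIMARY KEY,
        user_id    INTEGER REFERENCES users(id),
        event_name TEXT NOT NULL,
        created_at TEXT NOT NULL
    );
""")
conn.execute("INSERT INTO users VALUES (1, 'ana@example.com', 'Ana', 'pro')")
conn.execute("INSERT INTO events VALUES (1, 1, 'login', '2018-08-24')")

# The rows relate through the user_id foreign key -- exactly the
# "ways rows and columns relate" idea described above.
for row in conn.execute("""
    SELECT u.email, e.event_name, e.created_at
    FROM users u JOIN events e ON e.user_id = u.id
"""):
    print(row)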

If the data would fit better on a page of prose, you should look at a non-relational (NoSQL) database like Mongo or Hadoop.

Non-relational databases work well with large volumes of semi-structured data points. Classic examples of semi-structured data are social media posts, email, books, audio and video, and even geographic data.

If you are doing a large amount of text mining, image processing, or language processing, you will likely want a non-relational data store. You can get more information on that from RemoteDBA.com.
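
By way of contrast, here is a minimal sketch of the non-relational style, assuming a locally running MongoDB instance and the pymongo driver; the database name, collection name, and document shape are all hypothetical:

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/")
posts = client["analytics_demo"]["social_posts"]

# Documents need not share a schema; fields can vary per record.
posts.insert_one({
    "author": "ana",
    "text": "Trying out the new app!",
    "hashtags": ["launch", "beta"],
    "geo": {"lat": 16.4, "lon": 120.6},  # optional; only some posts carry it
})

# Query by an array field with no upfront schema required.
for doc in posts.find({"hashtags": "launch"}):
    print(doc["author"], doc["text"])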

The amount of data to work with:

You need to know roughly how much data you will be handling. The more data you have, the more helpful a non-relational database becomes: NoSQL does not impose constraints on incoming data, which lets you write much faster.

For under 1 TB, you can use Postgres or MySQL. Then you have Amazon Aurora for 2 TB to 64 TB. Amazon Redshift and Google BigQuery are the major databases for 64 TB to 2 PB. And lastly you have Hadoop, which covers all of the data sizes just mentioned.

Well, there is no strict limit; how much each database can handle depends on multiple factors. For under 1 TB of data, Postgres offers a good price-performance ratio, but it can slow down around 6 TB.

If you like MySQL but need more room to scale, there is Aurora, which can go up to 64 TB. At petabyte scale, Amazon Redshift is the best bet, as it is well optimized for running analytics on up to 2 PB.

And if you are after massively parallel processing, or simply have even more data, you can always try Hadoop.
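
The rules of thumb above can be rolled into a toy Python helper; the thresholds mirror this article's guidance, and the function is purely illustrative, not a real sizing tool:

def suggest_databases(size_tb):
    """Map a dataset size in terabytes to the article's rough suggestions."""
    if size_tb < 1:
        return ["Postgres", "MySQL"]
    if size_tb <= 64:
        return ["Amazon Aurora"]
    if size_tb <= 2048:  # roughly 2 PB
        return ["Amazon Redshift", "Google BigQuery"]
    return ["Hadoop"]  # Hadoop can also cover every size above

print(suggest_databases(0.5))   # ['Postgres', 'MySQL']
print(suggest_databases(500))   # ['Amazon Redshift', 'Google BigQuery']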


The focus of your engineering team:

Another important question the experts from RemoteDBA.com ask during a database discussion concerns the engineering team and its focus. The smaller the team, the more likely you need your engineers focused on building the product rather than on database management and pipelines. The number of people who can dedicate time to your project will narrow your options considerably.

With ample engineering resources, you have a vast array of choices: you can opt for either a non-relational or a relational database. Relational DBs generally take less time to manage than NoSQL.

If you have engineers who can handle the setup but no help with ongoing maintenance, favor Google Cloud SQL, Postgres, or a Segment Warehouse any day over Aurora, Redshift, or BigQuery.

When you do have enough time in hand for maintenance, you can select BigQuery or Redshift for faster queries at larger scale.

Relational databases carry another advantage: they can be queried with SQL. SQL is familiar to engineers and analysts alike and is easier to learn than most programming languages.

Running analytics on semi-structured data, by contrast, generally requires a background in object-oriented programming or code-heavy data science.

Even with recent analytics tools like Hunk or SlamData, analyzing these database types will usually need the help of an advanced data scientist or analyst.


How fast you need the data:
Even though real-time analytics is quite popular nowadays for use cases like system monitoring and fraud detection, most analyses do not need real-time data or immediate insights. When you are answering questions like what causes users to churn, or how people move from your app to your website, accessing the data with a slight lag is more or less fine. Your data won't change every minute.

So, if you are planning to do after-the-fact analysis, go for a database designed under the hood to accommodate large amounts of data and to read and join data quickly. That makes queries run faster than usual.
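
For instance, the churn question above boils down to a query like the sketch below; the schema and sample data are hypothetical, with sqlite3 standing in for a real warehouse:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER, billing_plan TEXT, churned INTEGER);
    INSERT INTO users VALUES
        (1, 'free', 1), (2, 'free', 1), (3, 'free', 0),
        (4, 'pro',  0), (5, 'pro',  1), (6, 'pro',  0);
""")

# Churn rate per plan: the fraction of users on each plan who churned.
for plan, rate in conn.execute("""
    SELECT billing_plan, AVG(churned) AS churn_rate
    FROM users
    GROUP BY billing_plan
    ORDER BY churn_rate DESC
"""):
    print(plan, round(rate, 2))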

Such a database will also load data reasonably fast, but only if you have someone resizing, vacuuming, and monitoring the cluster.

If you are in dire need of real-time data, look instead at an unstructured database such as Hadoop. You can design it to load data quite quickly, even though queries can take a long time at scale, depending on RAM usage, available disk space, and how the data is structured.

Once you are sure of the right database to use, it is important to figure out how to get your data into it. Building a scalable data pipeline is hard for newbies, which is why experts are always ready to offer crucial help. It takes time to find the right team, but once you have one, their services are worth it.
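
To give a feel for what the simplest possible pipeline does, here is a minimal incremental-load sketch in Python; both databases are in-memory stand-ins, and a real pipeline would add batching, retries, and monitoring on top of this loop:

import sqlite3

# Stand-in for a production read replica, seeded with sample rows.
source = sqlite3.connect(":memory:")
source.executescript("""
    CREATE TABLE events (id INTEGER PRIMARY KEY, user_id INTEGER, name TEXT);
    INSERT INTO events VALUES (1, 1, 'signup'), (2, 1, 'login');
""")

# Stand-in for the analytics warehouse.
target = sqlite3.connect(":memory:")
target.execute(
    "CREATE TABLE events (id INTEGER PRIMARY KEY, user_id INTEGER, name TEXT)")

# Incremental load: copy only rows newer than the warehouse's high-water mark.
last_id = target.execute("SELECT COALESCE(MAX(id), 0) FROM events").fetchone()[0]
rows = source.execute(
    "SELECT id, user_id, name FROM events WHERE id > ?", (last_id,)).fetchall()
target.executemany("INSERT INTO events VALUES (?, ?, ?)", rows)
target.commit()
print(f"Loaded {len(rows)} new rows")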