To develop .NET applications with Visual Studio in an IIS 7 environment, you need to complete several steps:
- Install IIS 7 on the web server, then on the development PC
- Add the .NET and ASP role services on the web server and development PC
- Add IIS 6.0 Metabase Compatibility on the web server and development PC
- Install Visual Studio on the web server and development PC
- Install the Visual Studio debugger on the web server and development PC
Sounds easy, doesn’t it? This document will help.
A service principal name (SPN) is the name by which a Kerberos client uniquely identifies an instance of a service for a given Kerberos target computer. Read more …
How can I delete records from a table when there are foreign keys that depend on these records? Read more at Stackoverflow …
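One common answer is to declare the foreign key with ON DELETE CASCADE so the database removes dependent rows for you (the alternative is to delete the child rows first). A minimal sketch using Python's sqlite3, with a hypothetical customers/orders schema chosen for illustration:

```python
import sqlite3

# Hypothetical schema for illustration: orders references customers.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY)")
conn.execute("""CREATE TABLE orders (
    id INTEGER PRIMARY KEY,
    customer_id INTEGER REFERENCES customers(id) ON DELETE CASCADE)""")
conn.execute("INSERT INTO customers VALUES (1)")
conn.execute("INSERT INTO orders VALUES (10, 1)")

# With ON DELETE CASCADE, deleting the parent row removes its dependents too.
conn.execute("DELETE FROM customers WHERE id = 1")
print(conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0])  # 0
```

Without the CASCADE option (or with the FK pragma off in SQLite), the same DELETE would either fail with a constraint error or leave orphaned order rows.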
Entity–attribute–value model (EAV) is a data model to describe entities where the number of attributes (properties, parameters) that can be used to describe them is potentially vast, but the number that will actually apply to a given entity is relatively modest. In mathematics, this model is known as a sparse matrix. EAV is also known as object–attribute–value model, vertical database model and open schema. Read more at Wikipedia …
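The idea can be sketched as a single table of (entity, attribute, value) triples, so each entity stores only the attributes that actually apply to it. A minimal example with Python's sqlite3, using made-up patient records for illustration:

```python
import sqlite3

# EAV sketch: one row per (entity, attribute, value) triple.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE eav (
    entity TEXT, attribute TEXT, value TEXT,
    PRIMARY KEY (entity, attribute))""")
rows = [
    ("patient:1", "blood_pressure", "120/80"),
    ("patient:1", "allergy", "penicillin"),
    ("patient:2", "weight_kg", "71"),  # patient 2 has entirely different attributes
]
conn.executemany("INSERT INTO eav VALUES (?, ?, ?)", rows)

# Reassemble one entity's sparse attribute set as a dict.
patient1 = dict(conn.execute(
    "SELECT attribute, value FROM eav WHERE entity = ?", ("patient:1",)))
print(patient1)
```

A conventional table would need a column for every possible attribute, most of them NULL for any given row; here the "matrix" stays sparse.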
Suppose I have a query where I want everything in table A that is not in table B. For example, I want all the customers in table A who have not placed an order in table B. There are a surprising number of ways to build a SELECT statement that returns the correct answer. Some of those queries may also give you an incorrect answer if one of the target fields has nullable data.
Aaron Bertrand's article will walk you through each of the examples to show you why NOT EXISTS is the best option for this type of query.
When using BIDS to create an SSIS package in a 64-bit environment, it is easy to create the connection to a 32-bit Access database. You can view and select columns from the database, modify them in the data flow, and map them to the destination table.
But when you execute the package, you get error 0xC0209303, which says the Access database could not be opened.
The quick fix: under the Project menu, pick the project Properties (the bottom choice), select Debugging under Configuration Properties, and set the Run64BitRuntime property to False.
Big Data is a buzzword that few people can agree on. So if someone starts spewing on about Big Data, ask them “What is your definition of Big Data?”
From a technical standpoint, Big Data is an extension of data warehousing and business intelligence. Data warehousing has always used large data sets, and data mining has always been an advanced analytical method for business intelligence. These technologies have been around since the 1990s and are well developed and mature.
So, what is Big Data, truly? Early adopters like Yahoo were trying to apply data warehousing techniques to Social Media data sets, which turned out to be considerably larger than most previous data sets. They solved their storage issues by mimicking Google's approach of using large server farms to store the data. There is now a packaged approach for this called Hadoop. Hadoop allows you to manage the servers, while the data is processed with Google's MapReduce technique.
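The MapReduce pattern itself is simple enough to sketch in a few lines. This toy word count (in plain Python, not Hadoop) shows the three phases the technique is built on: map emits key/value pairs, a shuffle groups them by key, and reduce aggregates each group.

```python
from collections import defaultdict
from itertools import chain

def map_phase(doc):
    # Map: emit a (word, 1) pair for every word in the document.
    return [(word, 1) for word in doc.split()]

def shuffle(pairs):
    # Shuffle: group all values by key, as the framework does between phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: aggregate each key's values into a single result.
    return {key: sum(values) for key, values in groups.items()}

docs = ["big data big servers", "big farms"]
counts = reduce_phase(shuffle(chain.from_iterable(map_phase(d) for d in docs)))
print(counts["big"])  # 3
```

In a real cluster the map and reduce calls run in parallel across many machines; the programming model stays exactly this shape.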
Because this Social Media data was difficult to store in a traditional relational database (RDBMS), early adopters turned to other alternatives. These databases are now typically called NoSQL databases, but this term is vague and unhelpful. The types of databases are many and varied, so find out the name and type of your database, and always use those in a discussion to avoid confusion.
The type of analysis done on this data is typical of business intelligence: basic reporting, probability, statistics, and data mining. Although these techniques are not new, the labor force for them is scarce. Big Data projects often require analysts with advanced skill sets along with additional creative skills to work outside the box.
Who is using Big Data? Primarily the retail and telco sectors, with some new adopters in the financial and health sectors.
In summary, Big Data uses a tool like Hadoop to store and process data, uses a non-traditional database like MongoDB to give structure to the data, and uses advanced analytical techniques like data mining to make sense of the data.
In a finding that overturns the conventional view that large old trees are unproductive, scientists have determined that for most species, the biggest trees increase their growth rates and sequester more carbon as they age. Read more…
The 100 Greatest Science Books list contains a mixture of classic and popular works, chosen for their accessibility and relevance. Most of the books selected are suitable for a well-educated layman, with only a few being for a more serious reader. The list covers the obvious subjects: biology, chemistry, and physics, as well as mathematics, the philosophy of science, and the history of science. It also includes several biographies.