$sample operator in MongoDB


If we want to select a certain number of documents randomly from a collection, we can use the $sample operator. $sample randomly selects the specified number of documents from its input.

Syntax:


{ $sample: { size: <positive integer> } }


Note:
  1. If the size is greater than or equal to 5% of the total documents in the collection, $sample scans the collection and performs a random sort.
  2. If the size is less than 5% of the total documents and the WiredTiger storage engine is in use, $sample uses a pseudo-random cursor on the collection.
  3. If the MMAPv1 storage engine is in use, $sample uses the _id index to select documents randomly.
  4. $sample can select the same document more than once in its result set.
 
Example: Suppose we have a collection named Books and want to select 3 documents randomly.


{ "_id" : 1, "Book Title": "book1", "price" : 20, "quantity" : 1, "date" : ISODate("2016-08-05T07:00:00Z") }
{ "_id" : 2, " Book Title " : " book2", "price" : 10, "quantity" : 2, "date" : ISODate("2016-08-05T08:00:00Z") }
{ "_id" : 3, " Book Title " : " book3", "price" : 30, "quantity" : 4, "date" : ISODate("2016-08-17T10:00:00Z") }
{ "_id" : 4, " Book Title " : " book4", "price" : 10, "quantity" : 2, "date" : ISODate("2014-09-01T11:20:39.736Z") }
{ "_id" : 5, " Book Title " : " book5", "price" : 20, "quantity" : 6, "date" : ISODate("2014-09-04T20:23:13.331Z") }


Run the following query:


db.Books.aggregate({ $sample: { size: 3 } })


Result:


{ "_id" : 5, " Book Title " : " book5", "price" : 20, "quantity" : 6, "date" : ISODate("2014-09-04T20:23:13.331Z") }
{ "_id" : 1, "Book Title" : "book1", "price" : 20, "quantity" : 1, "date" : ISODate("2016-08-05T07:00:00.000Z") }
{ "_id" : 3, " Book Title " : " book3", "price" : 30, "quantity" : 4, "date" : ISODate("2016-08-17T10:00:00.000Z") }

$out operator in MongoDB


The $out operator writes the documents returned by the aggregation pipeline to a specified collection. $out must be the last stage in the pipeline. The $out operator can return a result set of any size.

Syntax:


{ $out: "<output-collection>" }


When you use the $out operator, it creates a new collection in the current database. It is similar to a view in SQL Server, except that $out materializes an actual collection. If you want to aggregate documents from a collection and write the results to a new collection, use the $out operator. $out creates the new collection only if the aggregation succeeds; if the aggregation fails, no collection is created.

The $out operation replaces the existing collection if one with the specified name already exists in the current database. That is, if you run an aggregation with $out and the collection specified by $out already exists, the existing collection is replaced by the new output collection. If the operation fails, no changes are made to the database. Any indexes on the previous collection are preserved when it is replaced.

The operation fails if the documents produced by the pipeline violate any unique index on the output collection, including the index on the _id field.

Example:


{_id:1,Name:"Dilip",Project:"Pro1"}

{_id:2,Name:"Dilip",Project:"Pro2"}

{_id:3,Name:"Vipul",Project:"Pro3"}

{_id:4,Name:"Vipul",Project:"Pro4"}

{_id:5,Name:"Vipul",Project:"Pro5"}

{_id:6,Name:"Dilip",Project:"Pro6"}

{_id:7,Name:"Pankaj",Project:"Pro7"}


The above collection (EMP) stores employees and their projects. Suppose we want to create a new collection that summarizes each employee's projects.
 Run the following query.


db.EMP.aggregate( [

               { $group : { _id : "$Name", Projects: { $push: "$Project" } } },

               { $out : "EmpProject" }

            ] ) 


It will create a new collection named “EmpProject” in the current database, which looks like this:


 { "_id" : "Pankaj", "Projects" : [ "Pro7" ] }

 { "_id" : "Vipul", "Projects" : [ "Pro3", "Pro4", "Pro5" ] }

 { "_id" : "Dilip", "Projects" : [ "Pro1", "Pro2", "Pro6" ] }


Introduction to SQL Server 2016


Chapter 1

SQL Server 2016 is the latest edition of the Microsoft SQL Server database. SQL Server 2016 comes with a variety of new features and enhancements that deliver breakthrough performance, advanced security, and integrated reporting and analytics capabilities. Many features incorporated in SQL Server 2016 were first introduced in the cloud in Microsoft Azure SQL Database. SQL Server 2016 also includes the capability to dynamically migrate historical data to the cloud.

Some enhancements and features are given below.

Faster Queries:

In SQL Server, query speed has long been a major concern; users want data access to be as fast as you can make it.
  1. SQL Server 2016 gives several options for enabling faster queries. Memory-optimized tables now support even faster online transaction processing (OLTP) workloads, with better throughput as a result of new parallelized operations.
  2. For analytic workloads, you can take advantage of updateable, clustered column store indexes on memory-optimized tables to achieve queries that are up to one hundred times faster.
  3. Not only is the database engine better and faster in SQL Server 2016, but enhancements to the Analysis Services engine also deliver faster performance for both multidimensional and tabular models.
Some of these features and enhancements are the following:
  1. In-Memory OLTP enhancements
  2. T-SQL enhancements
  3. In-memory analytics
  4. Analysis Services enhancements
In-Memory OLTP enhancements

In-Memory OLTP was introduced in SQL Server 2014. 
  1. In-Memory OLTP accelerates transactional workloads with high concurrency and heavy latch contention by moving data from disk-based tables to memory-optimized tables.
  2. In-Memory OLTP can also help improve the performance of data warehouse staging by using nondurable, memory-optimized tables as staging tables (a minimal sketch follows this list). Although there were many good reasons to use memory-optimized tables in the first release of In-Memory OLTP, several limitations restricted the number of use cases for which In-Memory OLTP was suitable.
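
A minimal sketch of such a staging table (the table and column names are hypothetical): the SCHEMA_ONLY durability option skips logging and persistence entirely, which speeds up staging loads, but the data does not survive a server restart.

-- Nondurable, memory-optimized staging table (illustrative names)
CREATE TABLE dbo.StagingSales
(
    SaleID INT NOT NULL PRIMARY KEY NONCLUSTERED,
    Amount DECIMAL(18,2) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
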
New features for memory-optimized tables:
In SQL Server 2016, you can implement the following features in memory-optimized tables (a combined sketch follows the list).
  1. FOREIGN KEY constraints between memory-optimized tables, as long as the foreign key references a primary key.
  2. CHECK constraints.
  3. UNIQUE constraints.
  4. Triggers (AFTER) for INSERT/UPDATE/DELETE operations, as long as you use WITH NATIVE_COMPILATION.
  5. Columns with large object (LOB) types—varchar(max), nvarchar(max), and varbinary(max).
  6. Collation using any code page supported by SQL Server.
  7. Indexes for memory-optimized tables now support the following features: UNIQUE indexes, index keys with character columns using any SQL Server collation, and NULLable index key columns. 
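
A minimal sketch combining several of these features (all object names are hypothetical):

-- Memory-optimized tables exercising SQL Server 2016 features:
-- UNIQUE and CHECK constraints, a FOREIGN KEY, and an LOB column
CREATE TABLE dbo.Authors
(
    AuthorID INT NOT NULL PRIMARY KEY NONCLUSTERED,
    Name NVARCHAR(100) NOT NULL,
    Email NVARCHAR(200) UNIQUE,      -- UNIQUE constraint
    Biography NVARCHAR(MAX)          -- large object (LOB) column
)
WITH (MEMORY_OPTIMIZED = ON);

CREATE TABLE dbo.AuthorBooks
(
    BookID INT NOT NULL PRIMARY KEY NONCLUSTERED,
    AuthorID INT NOT NULL
        FOREIGN KEY REFERENCES dbo.Authors (AuthorID),  -- must reference a primary key
    Price DECIMAL(18,2) CHECK (Price >= 0)              -- CHECK constraint
)
WITH (MEMORY_OPTIMIZED = ON);
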
Scaling memory-optimized tables: 
You can put more data in memory-optimized tables and get better throughput to support bigger workloads. You can store up to 2 terabytes (TB) of data in a memory-optimized table when you create a durable table by using the SCHEMA_AND_DATA durability option in the CREATE TABLE statement. You can also create a memory-optimized table with a row size greater than 8,060 bytes, even without a large object (LOB) column. 
Example:

-- Durable memory-optimized table; the two NVARCHAR(4000) columns
-- can push the row size past 8,060 bytes
CREATE TABLE Books
(
    bookID INT IDENTITY PRIMARY KEY NONCLUSTERED,
    ISBN NVARCHAR(4000),
    Title NVARCHAR(4000),
    CreatedDate DATETIME,
    Price DECIMAL(18,2),
    Qty INT
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

Introducing native client enhancements:
SQL Server 2016 lets you enable Multiple Active Result Sets (MARS) when connecting to a memory-optimized table or running a natively compiled stored procedure. This way, the database engine can begin fetching rows from a new result set before completely retrieving rows from an earlier request.

Note: Before using MARS, you must enable it explicitly in the connection string, like this:

Data Source=<Your Server>; Initial Catalog=<Your Database>; Integrated Security=SSPI; MultipleActiveResultSets=True
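
A minimal C# sketch (the server and database names are placeholders): opening a connection with MARS enabled, so that two commands can have active result sets on the same connection.

using System.Data.SqlClient;

class MarsDemo
{
    static void Main()
    {
        // MultipleActiveResultSets=True turns MARS on for this connection
        var connString = "Data Source=MyServer;Initial Catalog=MyDb;" +
                         "Integrated Security=SSPI;MultipleActiveResultSets=True";
        using (var conn = new SqlConnection(connString))
        {
            conn.Open();
            // Two SqlCommand objects can now interleave their reads on conn
        }
    }
}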
  
MARS support for memory-optimized tables largely matches that for physical (disk-based) tables, but there are some differences, given below.
  1. If two statements attempt to modify the same row, a write-write conflict occurs and the operation fails.
  2. Each statement runs under SNAPSHOT isolation and is in a batch-scoped transaction, so the result of an operation performed by one statement is not visible to another statement. However, a rollback of one batch-scoped transaction does affect other transactions in the same batch.
  3. A user transaction cannot perform DDL operations. 
Exploring T-SQL enhancements:

The ALTER TABLE statement supports more functionality, so you can make changes to existing memory-optimized tables without needing to drop and re-create them first.
Alter Table Statement:

You can use the ALTER TABLE statement to change the table definition and indexes on memory-optimized tables. 
Example:

-- Add a column and an index
ALTER TABLE dbo.books ADD Qty INT NULL,
INDEX ix_CreatedDate(CreatedDate);
-- Alter an index by changing the bucket count
ALTER TABLE dbo.Books
ALTER INDEX ix_UserId REBUILD WITH
( BUCKET_COUNT = 2000000);
-- Drop an index
ALTER TABLE dbo.Books DROP INDEX ix_CreatedDate;



In most cases, the ALTER TABLE statement runs in parallel and writes only metadata changes to the transaction log as a performance optimization. However, the following single-threaded operations continue to require writing the entire altered table to the transaction log:
  1. Adding or altering a column with a LOB data type.
  2. Adding or dropping a columnstore index.
  3. Any operation affecting an off-row column—a column with a data type of char, nchar, varchar, nvarchar, binary, or varbinary that does not fit in the 8,060-byte row—except if the operation lengthens a column that is already off-row. 
Natively compiled modules:
Natively compiled tables and stored procedures were introduced in SQL Server 2014, enabling more efficient query execution and data access for memory-optimized tables. New in SQL Server 2016 is the ability to natively compile other objects, such as scalar user-defined functions (UDFs), inline table-valued functions (TVFs), and triggers. In addition to the support for new objects, the query surface area for native compilation is expanded and applies not only to natively compiled stored procedures but to all natively compiled modules. You can also alter a natively compiled module rather than dropping and re-creating it when a change to its functionality is necessary.
Much like you do when creating a natively compiled stored procedure, you must include the NATIVE_COMPILATION and SCHEMABINDING options in the WITH clause when you create a natively compiled scalar UDF. See the following example. 

Using LOBs in a natively compiled scalar UDF
CREATE FUNCTION dbo.TRIM_CONCAT
(
            @str1 NVARCHAR(MAX),
            @str2 NVARCHAR(MAX)
)
RETURNS NVARCHAR(MAX) WITH NATIVE_COMPILATION, SCHEMABINDING AS
BEGIN ATOMIC WITH
(
TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'English'
)
RETURN RTRIM(@str1) + LTRIM(@str2);
END;



SQL Server 2016 also supports the following capabilities for natively compiled modules (a sketch of the first capability follows the list):
  1. Use the OUTPUT clause with the INSERT, UPDATE, and DELETE statements in a natively compiled stored procedure.
  2. Omit the EXECUTE AS clause when you want EXECUTE AS CALLER as the execution context of a natively compiled module. You include an EXECUTE AS clause in a module only if you need to enforce a different execution context.
  3. Use any SQL Server collation in an expression.
  4. Use any of the built-in math and security functions.
  5. Use SELECT DISTINCT in the SELECT clause.
  6. Use OUTER JOIN, EXISTS, IN, and scalar sub-queries in the FROM clause.
  7. Use OR or NOT in the WHERE clause.
  8. Combine result sets by using the UNION or UNION ALL operations.
  9. Configure nested execution of natively compiled modules by using the EXECUTE statement. 
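
A minimal sketch of the first capability, reusing the hypothetical Books table from earlier: a natively compiled procedure that returns the updated values through an OUTPUT clause.

-- Natively compiled procedure using OUTPUT with UPDATE
CREATE PROCEDURE dbo.UpdateBookPrice
    @BookID INT,
    @NewPrice DECIMAL(18,2)
WITH NATIVE_COMPILATION, SCHEMABINDING
AS
BEGIN ATOMIC WITH
(
    TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'English'
)
    UPDATE dbo.Books
    SET Price = @NewPrice
    OUTPUT inserted.bookID, inserted.Price   -- return the new values
    WHERE bookID = @BookID;
END;
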
Managing memory-optimized tables: 
SQL Server 2016 can update STATISTICS on memory-optimized tables automatically, but you must set the database compatibility level to 130. Statistics are updated by data sampling rather than by a full scan. If you created memory-optimized tables in SQL Server 2014 or in an earlier build of SQL Server 2016, run the following script to enable automatic statistics updates for those tables. 

-- Enable automatic update on the database first, if necessary
ALTER DATABASE CURRENT SET AUTO_UPDATE_STATISTICS ON;
GO
-- Set the compatibility level, if necessary
ALTER DATABASE CURRENT SET COMPATIBILITY_LEVEL = 130;
GO
-- Run UPDATE STATISTICS once for each memory-optimized table
DECLARE @sql NVARCHAR(MAX) = N'';
SELECT @sql += N'UPDATE STATISTICS '
    + quotename(schema_name(t.schema_id))
    + N'.' + quotename(t.name)
    + ';' + CHAR(13) + CHAR(10)
FROM sys.tables AS t
WHERE t.is_memory_optimized = 1;
EXECUTE sp_executesql @sql;


  

Planning data migration to memory-optimized tables: 
SQL Server 2016 includes several tools for helping you plan a migration of data in existing tables to In-Memory OLTP. Reports and tasks in SQL Server Management Studio (SSMS) help you evaluate workloads and create checklists of migration steps. You can also use SQL Server PowerShell or Windows PowerShell to evaluate candidate objects for migration. 
Analysis Services enhancements:

There are two modes in Analysis Services to support fast reporting and analytics. 
  1. Multidimensional
  2. Tabular
Understanding multidimensional performance improvements: 
The performance of queries that include an unnatural hierarchy is better than in previous versions, although natural hierarchies remain faster still. The storage structure of an unnatural hierarchy is unchanged; instead, the engine is now able to execute the query in a more optimal fashion when it encounters a drill-down query. 
Example: 

--Drilling down one level of a hierarchy
SELECT NON EMPTY Hierarchize(
    DrilldownMember(
        {{DrilldownLevel({[Product].[Color-Size].[All]},,, INCLUDE_CALC_MEMBERS)}},
        {[Product].[Color-Size].[Color].&[White]},,, INCLUDE_CALC_MEMBERS))
DIMENSION PROPERTIES PARENT_UNIQUE_NAME, HIERARCHY_UNIQUE_NAME ON COLUMNS
FROM [Adventure Works]
WHERE ([Measures].[Reseller Sales Amount])
CELL PROPERTIES VALUE, FORMAT_STRING, LANGUAGE, BACK_COLOR, FORE_COLOR, FONT_FLAGS



Understanding tabular performance improvements: 
Tabular-model performance has always been good because of its use of the same columnar technology as columnstore indexes and in-memory OLTP. However, it’s even better in SQL Server 2016 because of DAX optimizations, storage-engine caching improvements, and changes to the way that T-SQL is generated for DirectQuery mode.

The first DAX optimization is a reduction in the number of queries sent to the storage engine from Power BI and Power BI Desktop. Prior to this change, a single chart could potentially send hundreds of queries to the storage engine. Now the Power BI client tools generate a single storage-engine query, whether the storage engine retrieves data from memory or from SQL Server in DirectQuery mode—as long as the query includes only simple measures. Furthermore, a single DAX query can return multiple result sets as intermediate results that Power BI can use across multiple requests.

The next optimization executes queries more efficiently in any client tool, including Microsoft Excel or SQL Server Reporting Services (SSRS) in addition to the Power BI client tools. This efficiency is a result of changes in the following areas that affect measure execution:
  1. Variables: As described in more detail in Chapter 6, “More analytics,” the use of variables in a DAX expression for measures allows you to reuse logic within the same expression, which can reduce overall execution time (a sketch follows this list).
  2. Conditional expressions: Rather than evaluating each branch of an IF or SWITCH conditional expression, a branch with a false condition no longer generates a storage-engine query. 
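
A minimal DAX sketch of the first point (the measure and table names are hypothetical, not from this chapter): the variable computes the prior-year figure once and reuses it, instead of evaluating the same logic in both the numerator and the denominator.

-- Hypothetical measure: the VAR is evaluated once, then reused
Sales YoY % :=
VAR PriorYearSales =
    CALCULATE ( [Sales Amount], SAMEPERIODLASTYEAR ( 'Date'[Date] ) )
RETURN
    DIVIDE ( [Sales Amount] - PriorYearSales, PriorYearSales )
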
In-memory analytics

SQL Server 2016 adds more options for performing real-time analytics on rapidly changing in-memory data by adding support for columnstore indexes on memory-optimized OLTP tables.

Reviewing columnstore index enhancements:

1.    An enhancement to clustered columnstore indexes in SQL Server 2016 is the ability to have a nonclustered B-tree index on a table that also has a clustered columnstore index.
2.    Improvements to batch-execution mode, a feature of columnstore indexes that allows results to be processed in batches of approximately 1,000 rows rather than row by row.
3.    SQL Server 2016 also includes support for columnstore indexes in readable secondary databases in Always On Availability Groups.

How to create a clustered columnstore index on a disk-based table:


CREATE CLUSTERED COLUMNSTORE INDEX ix_cci_Product ON ProductBig WITH (DROP_EXISTING=ON);


When you use the DROP_EXISTING = ON option, the existing index is dropped first and a new one is created in its place.

Create a nonclustered columnstore index with a filter condition:


CREATE NONCLUSTERED COLUMNSTORE INDEX ix_salesorderheaders_ncci
ON Sales.[Order] (CustomerID, ISBN, QTY, Price, ShippingDate)
WHERE ShippingDate < '2016-01-01';


Difference between IS and AS keyword in C#


The is keyword checks whether an object is compatible with a given type; the result of the evaluation is a Boolean (true or false).
For example:
        public void Emp()
        {
            object obj = new Employee();
            if (obj is Employee)  // true: obj references an Employee instance
            {
                // your code
            }
        }

The as keyword casts an object to a given type or class; unlike a direct cast, it returns null instead of throwing an exception when the cast fails.
For example:

Employee o = oEmp as Employee;
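
A minimal self-contained sketch (the Employee type is assumed from the earlier example): because as yields null on failure, pair it with a null check before using the result.

object obj = new Employee();
Employee emp = obj as Employee;  // null if obj were not an Employee
if (emp != null)
{
    // safe to use emp; no InvalidCastException can occur
}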

HTML I ELEMENT


We often see italic text in books and magazines. This doesn't mean that the <i> tag carries any extra importance. It simply represents text such as technical terms, foreign-language phrases, or a fictional character's thoughts.

The HTML <i> element represents text in italics.

Example


<p>Use these audits to review policies and procedures on:</p>
<p><i>Pay and rewards</i></p>
<p><i> Training </i></p>
<p><i> Employee development</i></p>
<p><i> People management </i></p>
<p><i>This is a book name "Dilip Singh"</i></p>


The result is shown below:


Use these audits to review policies and procedures on:
Pay and rewards
Training
Employee development
People management
This is a book name "Dilip Singh"


Usage Note:
  • The HTML <i> element is displayed in italic type. However, when italics are wanted purely for appearance, CSS styling is the better way to achieve them.
  • The <i> and <em> tags look alike, but their semantic meanings are different: <i> is only a presentational element, whereas <em> conveys emphasis.
  • Use <i></i> when text should be italicized but does not carry any real semantic meaning.
  • The HTML <i> element can represent the name of a book or movie; it can also represent a foreign word.
