GOTO Statement in SQL Server

After a condition or statement executes, the GOTO statement lets you alter the flow of execution so that it resumes from a particular label. In other words, GOTO skips the statements between itself and the label and continues execution from that label.
A GOTO construct in SQL Server has two parts:

GOTO statement declaration: the GOTO statement consists of the GOTO keyword followed by a label_name, as below.

GOTO label_name

Label declaration: the label consists of label_name followed by a colon, with at least one statement after it, as below.

label_name:
select * from tbl

label_name must be unique within the scope of the code, and at least one SQL statement must follow the label declaration.

Example: The following shows how we can use the GOTO statement in SQL Server.

DECLARE @rating INT = 10
WHILE @rating > 0
BEGIN
            IF @rating = 8
                        GOTO Codefari
            SET @rating = @rating - 1
            PRINT @rating
END
Codefari:
PRINT 'Reached label Codefari'


In the above example, we declared a label named Codefari. When the @rating value reaches 8, control branches to the Codefari label, skipping the remaining statements of the loop.
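As a further sketch (the label and messages here are illustrative, not from the original example), GOTO can also simply jump forward over statements:

```sql
PRINT 'Before the jump'
GOTO Done          -- control jumps straight to the Done label
PRINT 'This statement is skipped'
Done:
PRINT 'After the jump'
```

Note that SQL Server only permits jumping to a label defined within the same batch or procedure.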

Transaction Logs in SQL Server: why do we need them, and how can we read the transaction log in SQL Server?

Problem: As we know, SQL Server system databases store metadata about tables, schemas, stored procedures, functions, triggers, etc. But what about operational information? If a problem such as an error or a bug occurs during execution, or an intruder attempts something malicious on your server such as SQL injection, how can we track that kind of issue in SQL Server?

Solution: The transaction log may help you track this problem. The transaction log can serve as evidence for such issues.

Essential operations captured by the transaction log: Before explaining the transaction log, we should know which SQL operations it records.

The transaction log captures all DML operations such as INSERT, UPDATE, and DELETE, as well as some DDL operations such as CREATE, TRUNCATE, and DROP.

Why to read the SQL Server transaction log?

The transaction log contains all the information about the transactions performed in our database. If we performed some operation by mistake, such as a delete, or we lost some data and want to recover it, the transaction log may help resolve the problem.

How to read the SQL Server transaction log.

We can't read the transaction log directly: it stores its information in a binary (not human-readable) format. SQL Server provides functions to decode it: fn_dblog() and fn_dump_dblog().

Viewing the SQL Server transaction log using the fn_dblog() function:

fn_dblog() is an undocumented function of SQL Server, which allows you to view the transaction log records in the active part of the transaction log file for the current database.

The fn_dblog() function takes two parameters: a start LSN and an end LSN.

Start LSN: the starting log sequence number (LSN). You can specify NULL, which returns everything from the start of the log.

End LSN: the ending log sequence number (LSN). You can specify NULL, which returns everything up to the end of the transaction log.

Note: Undocumented means the function is unsupported; using it against production database instances is at your own risk. Only the fn_dblog() function is covered in this article.

How to use fn_dblog() function in SQL Server?

Explaining the uses of fn_dblog() on an existing database may create confusion, so I am using a new database, starting from its creation, step by step.

1-Step:  Create database ‘Codefari.’
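A minimal script for this step (the database name comes from the article; the rest is standard DDL) might be:

```sql
-- Create the demo database used in the following steps.
CREATE DATABASE Codefari
GO
```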



2-Step: Create table ‘Blogs’.

USE [Codefari]
GO
CREATE TABLE [dbo].[Blogs]
(
            ID INT IDENTITY(1,1) NOT NULL,
            TITLE VARCHAR(MAX)
)

3-Step: In this step, we will check how many log records SQL Server generated to create the database and table.

USE [Codefari]
SELECT COUNT(*) FROM fn_dblog(null,null)

You can see that SQL Server generated 180 log records to complete this process.

4-Step: In this step, you can see what data is available in the log.

USE [Codefari]

SELECT
[Current LSN],
[Transaction Name],
[Transaction ID],
[Page ID],
[Parent Transaction ID]
FROM fn_dblog(NULL, NULL)

5-Step: In this step, we will perform some DML operations and track the transaction information in the log.

INSERT INTO Codefari.[dbo].[Blogs] VALUES('Transaction log in Codefari')
UPDATE Codefari.[dbo].[Blogs] SET [TITLE]='Transaction log in' WHERE ID=1
DELETE FROM Codefari.[dbo].[Blogs] WHERE ID=1

6-Step: This step will show the transactions which we performed in 5-Step.

USE [Codefari]

SELECT
[Current LSN],
[Transaction ID],
[Transaction Name],
[Page ID],
[Slot ID],
[Begin Time],
[End Time],
[Number of Locks],
[Lock Information]
FROM sys.fn_dblog(NULL, NULL)
-- LOP_INSERT_ROWS, LOP_MODIFY_ROW and LOP_DELETE_ROWS are the operation
-- codes logged for INSERT, UPDATE and DELETE respectively.
WHERE Operation IN ('LOP_INSERT_ROWS', 'LOP_MODIFY_ROW', 'LOP_DELETE_ROWS')

We can see in the above steps how the fn_dblog() function exposes the captured transaction log records in a human-readable format.

The experiment above shows that fn_dblog() can be used whenever users want to view the SQL transaction log. Users can easily inspect the log to see activities such as page splits or objects being dropped.
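For example, to look specifically for dropped objects, you can filter on [Transaction Name]; 'DROPOBJ' is the name SQL Server typically records for a DROP, though you should verify it against your own log output:

```sql
USE [Codefari]

SELECT [Current LSN], [Transaction ID], [Transaction Name], [Begin Time]
FROM fn_dblog(NULL, NULL)
WHERE [Transaction Name] = 'DROPOBJ'  -- assumed transaction name; check your log
```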

MongoDB 3.4: What’s New

In the age of digital transformation and disruption, your ability to thrive depends on how quickly you adapt to a constantly changing market environment. MongoDB 3.4 is the latest release of the industry’s fastest-growing database. It offers a significant evolution in capabilities and enhancements that enable you to address emerging opportunities and use cases:

Multi-model Done Right: Native graph computation, faceted navigation, rich real-time analytics, and robust connectors for BI and Apache Spark bring additional multi-model database support right into MongoDB.

Mission-Critical Applications: Geo-distributed MongoDB zones, elastic clustering, tunable consistency, and enhanced security controls bring the state-of-the-art database technology to your most mission-critical applications.

Modernized Tooling: Enhanced DBA and DevOps tooling for schema management, fine-grained monitoring, and cloud-native integration allows engineering teams to ship applications faster, with less overhead and higher quality.


Join between two collections in MongoDB

The $lookup stage is used to perform a join between two collections. It is a new feature of MongoDB version 3.2.

$lookup performs a left outer join between two collections in the same database. For each input document, $lookup adds a new array field whose elements are the matching documents from the joined collection.


{
   $lookup:
     {
       from: <collection to join>,
       localField: <field from the input documents>,
       foreignField: <field from the documents of the "from" collection>,
       as: <output array field>
     }
}

$lookup takes a document with the following fields:
from: the name of the collection in the same database to join with.
localField: the field from the input documents that is matched for equality against foreignField in the documents of the "from" collection.
foreignField: the field from the documents of the "from" collection.
as: the name of the new array field that will contain the matching documents from the "from" collection.
Example: Suppose we have a collection of employees.

Collection EMP

{_id:5,Name:"Ashish",dep:"Syste Admin"}

Collection DEP

{_id:14,depName:"Syste Admin",Manager:"Mar5"}

We have two collections, EMP and DEP. To see employees along with their managers, run the following query.

db.EMP.aggregate([
   {
     $lookup:
       {
         from: "DEP",
         localField: "dep",
         foreignField: "depName",
         as: "EMp_Manager"
       }
   }
])

The query returns:


{ "_id" : 1, "Name" : "Manish", "dep" : "HR", "EMp_Manager" : [ { "_id" : 10, "depName" : "HR", "Manager" : "Mar1" } ] } 
{ "_id" : 2, "Name" : "Raj", "dep" : "Salesforce", "EMp_Manager" : [ { "_id" : 11, "depName" : "Salesforce", "Manager" : "Mar2" } ] }
{ "_id" : 3, "Name" : "Dilip", "dep" : "Database", "EMp_Manager" : [ { "_id" : 12, "depName" : "Database", "Manager" : "Mar3" } ] }
{ "_id" : 4, "Name" : "Vipul", "dep" : "UI", "EMp_Manager" : [ { "_id" : 13, "depName" : "UI", "Manager" : "Mar4" } ] }
{ "_id" : 5, "Name" : "Ashish", "dep" : "Syste Admin", "EMp_Manager" : [ { "_id" : 14, "depName" : "Syste Admin", "Manager" : "Mar5" } ] }
{ "_id" : 7, "Name" : "Pankaj", "EMp_Manager" : [] }

PostgreSQL-Query: Sort result set by specific field values using ORDER BY Clause

Problem: Suppose we have a book_inventory table which has some columns such as id, isbn, title, author, publisher, publish_date, etc.. whe...