byte = 10 ^ 0
kilobyte = 10 ^ 3
megabyte = 10 ^ 6
gigabyte = 10 ^ 9
terabyte = 10 ^ 12
petabyte = 10 ^ 15
exabyte = 10 ^ 18
zettabyte = 10 ^ 21
yottabyte = 10 ^ 24
brontobyte = 10 ^ 27
geopbyte = 10 ^ 30
Programmer Collection
Programming Tips and Tricks
Thursday, November 20, 2014
Sunday, July 13, 2014
[MSSQL] Access Denied attaching a database when permissions are inherited
If you attempt to attach a database, and the mdf/ldf files are in a directory where your user account has privileges only through a group, you get an access denied error. If you grant the same privilege set directly to your user account, the attach succeeds.
Error: Operating system error 5: "5(Access is denied)". (Microsoft SQL Server, Error: 5120)
Try the following steps:
- Delete the LDF file from the folder where both files are saved,
- Go to [Management Studio] and try to attach the same database,
- In the window where you select the database to attach, the second list will show a remark that the LDF file was not found,
- Select the LDF file entry and click the [Remove] button,
- Click [OK] and your database will be attached.
This happens because some of your privilege information is saved in the LDF file; for that reason you must clear it.
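If you prefer to script the attach instead of using the dialog, the same idea (attach the MDF without the old LDF and let SQL Server rebuild the log) can be done in T-SQL. This is only a minimal sketch; the database name and file path below are hypothetical placeholders, and it works when the database was shut down cleanly (otherwise the original log file is still required):
-- Attach the MDF only and rebuild a new log file (placeholder names)
CREATE DATABASE MyDatabase
ON (FILENAME = N'C:\Data\MyDatabase.mdf')
FOR ATTACH_REBUILD_LOG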
If this still does not solve the problem, you can try the following steps:
- Un-check the [Read-only] attribute on the files and folders along this path,
- [Start] => [Run...] => type services.msc,
- Scroll through the list of services until you find 'SQL Server ...' => Right-click => [Properties] => click the [Log On] tab,
- Copy the content of the [This Account] field (e.g.: NT Service\MSSQL$SQLEXPRESS) => [Cancel],
- Right-click the MDF file => [Properties] => click the [Security] tab => [Edit] => [Add...],
- Paste the value copied from [This Account] in services => [OK],
- Check [Allow] for 'Full Control' => [Apply] => [OK].
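The same permission grant can also be scripted from an elevated command prompt instead of clicking through the [Security] tab. A sketch, assuming the NT Service\MSSQL$SQLEXPRESS account from the example above and hypothetical file paths:
icacls "C:\Data\MyDatabase.mdf" /grant "NT Service\MSSQL$SQLEXPRESS":F
icacls "C:\Data\MyDatabase_log.ldf" /grant "NT Service\MSSQL$SQLEXPRESS":F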
If the steps above still do not solve the problem, try this last approach:
- Right-click [SQL Server Management Studio] and choose [Run as administrator],
- Try to attach the database. Once it is attached you can access the database in the normal way,
- Quit [SQL Server Management Studio], then open it again with your usual permissions (just click the icon).
Source:
[MSSQL] SQL Server tablediff command line utility
Problem
One problem that DBAs often face is maintaining lookup tables across multiple servers or sites. These tables could either be replicated or manually updated, but in any case sometimes the data in these tables gets out of synch. In a previous tip we talked about SQL Server comparison tools that allow you to compare database objects or even the data itself between two different databases. These tools are great and definitely offer a lot of advantages, but there is a tool that ships with SQL Server that might help as well.
Solution
SQL Server comes with many GUI and command line tools, and one new tool that is available is the tablediff command. This utility allows you to compare the contents of one table against another table to identify whether there are differences, as well as create the T-SQL commands to get the tables back in synch.
The one problem with this command is that it is a command line tool; there is no GUI, so writing the command is a bit tedious. You can create a batch file and execute that, but it would still be nice if there were a GUI.
The command takes a few basic parameters to run.
- sourceserver
- sourceuser
- sourcepassword
- sourcedatabase
- sourcetable
- destinationserver
- destinationuser
- destinationpassword
- destinationdatabase
- destinationtable
Here is an example command that compares two tables Table1 and Table2 in the same database.
"C:\Program Files\Microsoft SQL Server\90\COM\tablediff.exe" -sourceserver server1 -sourceuser user1 -sourcepassword password1 -sourcedatabase test -sourcetable table1 -destinationserver server1 -destinationuser user2 -destinationpassword password2 -destinationdatabase test -destinationtable table2
When the command is run this is the output that you get:
Microsoft (R) SQL Server Replication Diff Tool
Copyright (C) 1988-2005 Microsoft Corporation. All rights reserved.
User-specified agent parameter values:
-sourceserver server1
-sourceuser user1
-sourcepassword password1
-sourcedatabase test
-sourcetable table1
-destinationserver server2
-destinationuser user2
-destinationpassword password2
-destinationdatabase test
-destinationtable table2
Table [test].[dbo].[table1] on server1 and Table [test].[dbo].[table2] on server1 have 3 differences.
Err PersonID
Mismatch 1
Dest. Only 2
Src. Only 3
The requested operation took 0.4375 seconds.
From this basic command we can see there are differences, but it is not very helpful as to what the problem is, so to make this more useful we can use the "-et" argument, which writes the differences to a table. In our case the table is called "Difference".
"C:\Program Files\Microsoft SQL Server\90\COM\tablediff.exe" -sourceserver server1 -sourceuser user1 -sourcepassword password1 -sourcedatabase test -sourcetable table1 -destinationserver server2 -sourceuser user2 -destination password2 -destinationdatabase test -destinationtable table2 -et Difference
When we query the "Difference" table that was created we see the following results. As you can see, this is not much more helpful than the first run.
PersonId | MSdifftool_ErrorCode | MSdifftool_ErrorDescription
1        | 0                    | Mismatch
2        | 1                    | Dest. Only
3        | 2                    | Src. Only
Another option is to use the "-f" argument that will create a T-SQL script to synchronize the two tables.
"C:\Program Files\Microsoft SQL Server\90\COM\tablediff.exe" -sourceserver server1 -sourceuser user1 -sourcepassword password1 -sourcedatabase test -sourcetable table1 -destinationserver server2 -destinationuser user2 -destinationpassword password2 -destinationdatabase test -destinationtable table2 -et Difference -f c:\table1_differences.sql
This is the output in the file that is created, "c:\table1_differences.sql":
-- Host: server1
-- Database: [test]
-- Table: [dbo].[table2]
UPDATE [dbo].[table2] SET [LastName]=NULL WHERE [PersonID] = 1
DELETE FROM [dbo].[table2] WHERE [PersonID] = 2
INSERT INTO [dbo].[table2] ([FirstName],[LastName],[PersonID]) VALUES ('Bob','Jones',3)
From here we can see the exact differences, and we also have a script that we can run against Table2 to make its contents identical to Table1.
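If you want to reproduce the output above, here is a minimal sketch of the two sample tables. The column types and the sample names other than 'Bob Jones' are assumptions (only PersonID, FirstName and LastName appear in the generated script), and both tables have the primary key that tablediff requires:
CREATE TABLE dbo.table1 (PersonID INT NOT NULL PRIMARY KEY, FirstName VARCHAR(50), LastName VARCHAR(50))
CREATE TABLE dbo.table2 (PersonID INT NOT NULL PRIMARY KEY, FirstName VARCHAR(50), LastName VARCHAR(50))
-- PersonID 1 exists in both tables but LastName differs (Mismatch)
INSERT INTO dbo.table1 (PersonID, FirstName, LastName) VALUES (1, 'Ann', NULL)
INSERT INTO dbo.table2 (PersonID, FirstName, LastName) VALUES (1, 'Ann', 'Smith')
-- PersonID 2 exists only in the destination table (Dest. Only)
INSERT INTO dbo.table2 (PersonID, FirstName, LastName) VALUES (2, 'Joe', 'Brown')
-- PersonID 3 exists only in the source table (Src. Only)
INSERT INTO dbo.table1 (PersonID, FirstName, LastName) VALUES (3, 'Bob', 'Jones')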
Next Steps
- This is a brief introduction to this new tool and how it can be used; check out tablediff in Books Online for additional options.
- Add this new tool to your SQL toolbox and use it where you need to keep certain tables in synch.
- In order to run this command you need to have primary keys set up on both tables.
Source:
[MSSQL] Why Primary Keys can be important on large tables
An easy answer to the question of why primary keys can be important is: because if you don't have a primary key / clustered index and your table handles inserts, updates and deletes, SQL Server will fragment the table, and it will take up a lot of extra space. Now, are primary keys required? Are they always needed? Only your design and implementation can answer those questions for your situation.
If you don't have a primary key / clustered index then you have a heap table. Note that a primary key and a clustered index are not the same thing: the primary key has to be unique, but a clustered index can have duplicates. In general, if your table is doing inserts, updates and deletes, it is a bad idea to have a table without a clustered index.
I inherited a database from a previous employee. This database was used as a staging area to load data into the live database. Records are inserted, processed, and then deleted after a period of time. The processes worked fine, so I didn't think much about reviewing the tables inside of this staging db. As time went on the database continued to grow, which didn't make sense since the number of records didn't match the growth I was seeing. When I took a closer look at the tables I realized none of the tables had primary keys or clustered indexes.
Let's take just a moment to talk about what it takes for a column to be a primary key. First the column needs to be set as not null. Second the column needs to be unique. So an identity column can be a great choice for a primary key. Here is a funny thing about those tables that didn't have a primary key: They already had an identity column.
So I altered each table and made the existing identity column the primary key / clustered index. The effect was the total size of the database dropped significantly. Here is an example of one table:
Count   | TableSize  | DataSpace | IndexSpace | UnusedSpace
1310149 | 3134048 KB | 462464 KB | 24 KB      | 2671560 KB
After adding a primary key:
Count   | TableSize  | DataSpace | IndexSpace | UnusedSpace
1310167 | 163976 KB  | 163536 KB | 384 KB     | 56 KB
Notice that before there was a primary key / clustered index, the data space used and the unused space were quite large. After a primary key was added, the data space and the unused space dropped, and the index space used didn't go up that much. I had a 2 GB database that was reduced to 1 GB after adding a primary key / clustered index to the tables.
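If you want to gather the same before and after numbers for your own tables, sp_spaceused reports them; its rows, reserved, data, index_size and unused columns correspond roughly to the Count, TableSize, DataSpace, IndexSpace and UnusedSpace values above. A minimal sketch with a placeholder table name:
-- Report row count and space usage for one table (placeholder name)
EXEC sp_spaceused 'dbo.tableName'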
Now we get to the part of the article that might help you if you want to add a primary key to a table that doesn't have one. There are two easy ways you can make an existing column into a primary key. First you can right click the table in Management Studio and click Design.
Then select the identity column or the column(s) you want to be the primary key and click the key button.
You should see a little key next to the column(s) you selected and then you can click the save button.
Once the change saves you will have a primary key that is the clustered index.
If there are a lot of rows in the table you are trying to add a primary key to, you might get a time out error when you click the save button if you use the right click design table method. This leads to the next method, which is using T-SQL. You can write a simple alter table statement to add a primary key / clustered index.
ALTER TABLE dbo.tableName ADD PRIMARY KEY CLUSTERED (columnname)
You would replace dbo.tableName with your table and columnname with the column you want to make the primary key / clustered index.
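For example, with a hypothetical staging table dbo.StagingData and an existing identity column named id, you can also give the constraint an explicit name so it is easier to find and drop later:
-- Make the existing identity column the named primary key / clustered index (placeholder names)
ALTER TABLE dbo.StagingData ADD CONSTRAINT PK_StagingData PRIMARY KEY CLUSTERED (id)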
Now if you don't have a good column you can use as a primary key, you will have to add a column. The simplest thing to do would be to add a primary key identity column to the table. The T-SQL would look something like this:
ALTER TABLE dbo.tableName ADD [id] INT IDENTITY(1,1) NOT NULL PRIMARY KEY
Note if you have more than two billion rows in your table you would want to use a bigint instead of an int. It is important to note that there are cases when the primary key should not be the clustered index since the table may benefit from other column(s) as the clustered index.
If you need help finding tables that are missing primary keys you can run this SQL:
SELECT s.name AS SchemaName, t.name AS TableName
FROM sys.tables t
JOIN sys.schemas s ON t.schema_id = s.schema_id
WHERE t.type = 'U'
AND NOT EXISTS ( SELECT k.name
                 FROM sys.key_constraints k
                 WHERE k.parent_object_id = t.object_id
                 AND k.schema_id = s.schema_id
                 AND k.type = 'PK' )
ORDER BY t.name
I hope this article has shown the importance of adding primary keys to large tables to reduce the overall size of the database.
[MSSQL] How to Create Linked Server for a MySQL database using SQL Server Management Studio
Creating a Linked Server in SSMS for a MySQL database
- Install MySQL Connector
- Download the MySQL ODBC driver from mysql.com
- Install MySQL ODBC driver on Server where SQL Server resides
=> Double Click Windows Installer file and follow directions.
- Create a DSN using the MySQL ODBC driver
- Windows [Start] => [Settings] => [Control Panel] => [Administrative Tools] => [Data Sources (ODBC)]
- Click on the [System DSN] tab
- Click [Add]
- Select the 'MySQL ODBC Driver'
- Click [Finish]
- On the [Login] Tab
- Type a descriptive name for your [DSN].
- Type the server name or IP Address into the [Server] text box.
- Type the username needed to connect to the MySQL database into the [User] text box.
- Type the password needed to connect to the MySQL database into the [Password] text box.
- Select the [Database] you'd like to start in.
- On the [Advanced] Tab - Under Flags 1
- Check [Don't Optimize column width]
- Check [Return Matching Rows]
- Check [Allow Big Results]
- Check [Use Compressed protocol]
- Check [BIGINT columns to INT]
- Check [Safe]
- On the [Advanced] Tab - Under Flags 2
- Check [Don't Prompt Upon Connect]
- Check [Ignore # in Table Name]
- On the [Advanced] Tab - Under Flags 3
- Check [Return Table Names for SQLDescribeCol]
- Check [Disable Transactions]
- Now test your DSN by Clicking the [Test] button
- Create a Linked Server in SSMS for the MySQL database
- SSMS (SQL Server Management Studio)
- Expand [Server Objects]
- Right Click [Linked Servers]
- Select [New Linked Server]
- On the [General] Page:
- [Linked Server]: Type the Name for your Linked Server
- [Server Type]: Select Other Data Source
- [Provider]: Select Microsoft OLE DB Provider for ODBC Drivers
- [Product name]: Type MySQLDatabase
- [Data Source]: Type the name of the DSN you created
- On The [Security] Page:
- Map a login to the [Remote User] and provide the [Remote User's Password]
- Click [Add] under Local server login to remote server login mappings:
- Select a [Local Login] From the drop down box
- Type the name of the [Remote User]
- Type the password for the [Remote User]
- Change the Properties of the Provider MSDASQL
- Expand [Providers]
- Right Click MSDASQL
- Select Properties
- Enable Nested queries
- Enable Level zero only (this one's the kicker)
- Enable Allow inprocess
- Enable Supports 'Like' operator
- Change settings in SQL Server Surface Area Configuration for Features
- Enable OPENROWSET and OPENDATASOURCE support.
- Change settings in SQL Server Surface Area Configuration for Services and Connections
- Select Properties
- Enable Local and Remote connections via TCP/IP and named pipes
- Stop SQL Server and SQL Server Agent
- Start SQL Server and SQL Server Agent
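If you prefer to script the linked server instead of using the SSMS dialogs, or simply want to test it afterwards, here is a minimal T-SQL sketch. The linked server name MYSQL_LINK, the DSN name MyMySQLDSN, the remote credentials and the table name mytable are all hypothetical placeholders:
-- Create the linked server over the ODBC DSN and map a login
EXEC master.dbo.sp_addlinkedserver @server = N'MYSQL_LINK', @srvproduct = N'MySQLDatabase', @provider = N'MSDASQL', @datasrc = N'MyMySQLDSN'
EXEC master.dbo.sp_addlinkedsrvlogin @rmtsrvname = N'MYSQL_LINK', @useself = N'False', @locallogin = NULL, @rmtuser = N'mysql_user', @rmtpassword = N'mysql_password'
-- Test with a pass-through query...
SELECT * FROM OPENQUERY(MYSQL_LINK, 'SELECT * FROM mytable')
-- ...or with four-part naming (the catalog defaults to the database set in the DSN)
SELECT * FROM MYSQL_LINK...mytable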
[MSSQL] Deleting Large Number of Records
By Lynn Pettis, 2011/02/18 (first published: 2009/09/15)
Introduction
One of the frequent questions asked on forums is how to delete a large number of records from a table in SQL Server. The question can come from several directions: sometimes it is the result of excess growth of the database's transaction log, sometimes the table must remain available to users during the delete process, or the user is simply looking for an efficient means to delete records on a regular basis. With this in mind, I'd like to present one method that can be used to delete records from a table that will allow the table to remain available to users and will help keep the transaction log from growing excessively. I will show how this can be done in SQL Server 2000 and SQL Server 2005/2008.
The Setup
The first thing we need to do, however, is set up the table from which we will be deleting a large number of records. Most requests for help in this area come from individuals asking how to delete several million rows of data from a multi-million row table. For this code we will create a table with 20 million rows of data, and we will delete approximately 10%, or around 2 million, of those rows. The test data generated below will have transaction dates (TDate) from January 1, 2000 through December 31, 2009. We will purge all data from the year 2000.
USE [SandBox] -- This is my test bed database
GO
-- Create the test table
create table dbo.LAPTest
(
    AccountID int,
    Amount money,
    TDate datetime
)
create clustered index IX_LAPTest_TDate on dbo.LAPTest ( TDate asc )
GO
--===== Build the table 100000 rows at a time to "mix things up"
DECLARE @Counter INT
SET @Counter = 0
WHILE @Counter < 20000000
BEGIN
    --===== Add 100000 rows to the test table
    INSERT INTO dbo.LAPTest ( AccountID, Amount, TDate )
    SELECT TOP 100000
        AccountID = ABS(CHECKSUM(NEWID())) % 50000 + 1,
        Amount = CAST(CHECKSUM(NEWID()) % 10000 / 100.0 AS MONEY),
        TDate = CAST(RAND(CHECKSUM(NEWID())) * 3653.0 + 36524.0 AS DATETIME)
    FROM master.dbo.syscolumns t1
    CROSS JOIN master.dbo.syscolumns t2
    --===== Increment the counter
    SET @Counter = @Counter + 100000
END
GO
SQL Server 2000
The first code snippet I will write will be for deleting the records from the table in SQL Server 2000. Although this code will also work in SQL Server 2005, it will not work in future versions of SQL Server. For that reason, I will rewrite the code I develop for SQL Server 2000 to use the new capabilities available in SQL Server 2005. I have made every effort to make this code generic, and it should work with minimal changes. Since it may be used in databases that are set to use either the SIMPLE, BULK_LOGGED, or FULL recovery model, I have included conditional logic to perform a transaction log backup if the database is not using the SIMPLE recovery model.
declare @recoverymodel varchar(12),
        @dbname sysname,
        @backupdevice sysname,
        @batchsize int,
        @loopcnt int,
        @year smallint,
        @backupfile nvarchar(512)

select @recoverymodel = cast(DATABASEPROPERTYEX(db_name(), 'Recovery') as varchar(12)),
       @dbname = db_name(),
       @backupdevice = 'D:\Backups\',
       @batchsize = 10000,
       @loopcnt = 0,
       @year = 2000

set rowcount @batchsize -- used for SQL Server 2000 to limit rows affected

while @batchsize <> 0
begin
    delete from dbo.LAPTest
    where TDate >= dateadd(yyyy, (@year - 1900), 0)
      and TDate < dateadd(yyyy, (@year - 1900) + 1, 0)

    set @batchsize = @@rowcount

    if @batchsize > 0 and @recoverymodel <> 'SIMPLE'
    begin
        set @loopcnt = @loopcnt + 1
        set @backupfile = @backupdevice + @dbname + N'_' + cast(@loopcnt as nvarchar(8)) + N'.trn'
        backup log @dbname to disk = @backupfile with description = N'T-Log Backup during purge'
    end
end

set rowcount 0 -- reset rows affected after processing, SQL Server 2000
SQL Server 2005
Now we will take the code above and modify it to use the new features introduced in SQL Server 2005. This also allows the code to be used in SQL Server 2008 and future versions as well. As you look at the code you will see very little difference between the two versions: the SET ROWCOUNT statements are removed, and a TOP clause is added to the DELETE statement.
declare @recoverymodel varchar(12),
        @dbname sysname,
        @backupdevice sysname,
        @batchsize int,
        @loopcnt int,
        @year smallint,
        @backupfile nvarchar(512)

select @recoverymodel = cast(DATABASEPROPERTYEX(db_name(), 'Recovery') as varchar(12)),
       @dbname = db_name(),
       @backupdevice = 'D:\Backups\',
       @batchsize = 10000,
       @loopcnt = 0,
       @year = 2000

while @batchsize <> 0
begin
    delete top (@batchsize) from dbo.LAPTest
    where TDate >= dateadd(yyyy, (@year - 1900), 0)
      and TDate < dateadd(yyyy, (@year - 1900) + 1, 0)

    set @batchsize = @@rowcount

    if @batchsize > 0 and @recoverymodel <> 'SIMPLE'
    begin
        set @loopcnt = @loopcnt + 1
        set @backupfile = @backupdevice + @dbname + N'_' + cast(@loopcnt as nvarchar(8)) + N'.trn'
        backup log @dbname to disk = @backupfile with description = N'T-Log Backup during purge'
    end
end
Variations
Now that you have the basic generic code to delete large volumes of data from a table, you may decide that you don't want 200+ transaction log backup files should you have to do a restore to a point in time. If your database is not using the SIMPLE recovery model, you could modify the conditional used in the IF statement to decide whether the transaction log backup should be run. One change you could make is to only run the transaction log backup if (@loopcnt modulo (some value) = 0 and @batchsize > 0) OR (@batchsize = 0), as sketched below. In addition to this logic, you could also add a differential backup after the purge is run.
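As a sketch of that variation (using the SQL Server 2005 version above and, as an arbitrary example, a log backup every 20 batches), the section of the loop after "set @batchsize = @@rowcount" could be changed to something like this:
set @loopcnt = @loopcnt + 1 -- now counts delete batches rather than log backups
if ((@loopcnt % 20 = 0 and @batchsize > 0) or @batchsize = 0)
   and @recoverymodel <> 'SIMPLE'
begin
    set @backupfile = @backupdevice + @dbname + N'_' + cast(@loopcnt as nvarchar(8)) + N'.trn'
    backup log @dbname to disk = @backupfile with description = N'T-Log Backup during purge'
end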
Conclusion
With these sample code snippets, I hope you can see how easily you can batch your deletes, allowing the table to remain relatively open for continued use while keeping your transaction log files manageable when your database is using the BULK_LOGGED or FULL recovery model.
Source: http://www.sqlservercentral.com/articles/T-SQL/67898/