Monday, October 29, 2007
[windows] Shutdown Windows Automatically
1. Click Start, and then click Control Panel.
2. Click Performance and Maintenance, and then click Scheduled Tasks.
3. Double-click Add Scheduled Task. The Scheduled Task Wizard starts.
4. Click Next.
5. Under Click the program you want Windows to run, click Browse.
6. In the Select Program to Schedule dialog box, locate the %SystemRoot%\System32 folder, locate and click the Shutdown.exe file, and then click Open.
7. Under Perform this task, specify a name for the task and how frequently you want this task to run, and then click Next.
8. Under Select the time and day you want this task to start, specify a start time and date for the task, and then click Next.
9. Type the user name and password to run this task under, and then click Next.
10. Click to select the Open advanced properties for this task when I click Finish check box, and then click Finish.
11. Click the Task tab.
12. In the Run box, specify any additional parameters that you want to use with Shutdown.exe (see the example after these steps).
13. Click OK.
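For example (the timing and message here are illustrative; run shutdown.exe without arguments at a command prompt to see the full list of switches), a forced shutdown after a 60-second countdown with a message would be:

shutdown -s -f -t 60 -c "Nightly automatic shutdown"

Use -r instead of -s to restart the computer rather than shut it down.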
Tuesday, October 23, 2007
Recommended Add-ons for Mozilla Thunderbird
- XNote: Persistent sticky notes for Thunderbird, associated with mails.
- Clippings: Saves and manages frequently-entered text for later pasting into forms and input fields without the hassle of retyping or repetitive copying and pasting. Perfect for forms, web mail, blogging, wiki editing and forum posts.
- ReminderFox: An extension that displays and manages lists of date-based reminders and ToDo's. ReminderFox does not seek to be a full-fledged calendar system. In fact, the target audience is anybody who simply wants to remember important dates (birthdays, anniversaries, etc.) without having to run a fat calendar application.
- Auto Zip Attachments: Allows you to easily compress (zip) attachments. A new button named "AutoZip" is added to the compose toolbar. Upon pressing the button, all the attachments are compressed into a single file, and the attachments are then replaced with the zipped file.
- Attachment Extractor: Just select the messages containing the attachments you want to extract, select 'Extract Attachments', and this extension does the rest. No more having to go through each message to extract the attachments!
- ConfirmFolderMove: Introduces a preference that allows users to be prompted before moving folders.
- Enigmail: Adds OpenPGP message encryption and authentication to your email client. It features automatic encryption, decryption and integrated key management functionality. Enigmail requires GnuPG (www.gnupg.org) for the cryptographic functions. Note: GnuPG is not part of the installation.
- NotToForward: Enables you to avoid sending mail to specific recipients.
- SenderFace: Enables you to associate images (faces) with email senders (the From field).
Recommended Add-ons for Firefox
- WebDeveloper: Adds a menu and a toolbar with various web developer tools.
- FireBug: Firebug integrates with Firefox to put a wealth of development tools at your fingertips while you browse. You can edit, debug, and monitor CSS, HTML, and JavaScript live in any web page.
- GSpace: This extension allows you to use your Gmail space (2.8 GB and growing) for file storage. It acts as an online drive, so you can upload files from your hard drive and access them from every Internet-capable system. The interface makes your Gmail account look like an FTP host.
- FlashGot: Download one link, selected links or all the links of a page together at maximum speed with a single click, using the most popular, lightweight and reliable external download managers. Dozens of download tools are supported; see http://flashgot.net for details. This extension also offers a Build Gallery feature which helps collect serial movies and images scattered across several pages into a single page, for easy and fast "download all".
- Clipmarks: With Clipmarks, you can clip the best parts of web pages. Whether it's a paragraph, sentence, image or video, you can capture just the pieces you want without having to bookmark the entire page.
- iMacros: The web browser is probably the most frequently used software today, and many tasks are highly repetitious. iMacros for Firefox relieves the repetition of checking on the same sites every day, remembering passwords and filling out web forms. Web professionals can use the software for functional and regression testing of web applications. Another use is to automate web scraping, for example online stock quotes or web store prices.
- VideoDownloader: Download videos from YouTube, Google, Metacafe, iFilm, Dailymotion and over 60 other video sites, plus all embedded objects on a webpage (movies, mp3s, Flash, QuickTime, etc.), directly.
Friday, October 5, 2007
[windows] Shutdown Windows Using Script
Sub ShutDown()
    Dim Connection, WQL, SystemClass, System
    'Get a connection to the local WMI service
    Set Connection = GetObject("winmgmts:root\cimv2")
    'Get the Win32_OperatingSystem objects - only one object in the collection
    WQL = "Select Name From Win32_OperatingSystem"
    Set SystemClass = Connection.ExecQuery(WQL)
    'Get the one system object - I think there is no way to get it directly by URL?
    For Each System In SystemClass
        'Flag 2 = reboot (see the table of flag values below)
        System.Win32Shutdown (2)
    Next
End Sub
Value | Meaning |
---|---|
0 | Log Off |
0 + 4 | Forced Log Off |
1 | Shutdown |
1 + 4 | Forced Shutdown |
2 | Reboot |
2 + 4 | Forced Reboot |
8 | Power Off |
8 + 4 | Forced Power Off |
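The flag values combine by addition, as the table shows. Here is a minimal sketch of a parameterized variant of the Sub above; the {(Shutdown)} clause in the moniker requests the shutdown privilege, which Win32Shutdown may need on some systems:

Sub ShutDownWithFlag(Flags)
    Dim Connection, SystemClass, System
    'Request the Shutdown privilege while connecting to local WMI
    Set Connection = GetObject("winmgmts:{(Shutdown)}!\\.\root\cimv2")
    Set SystemClass = Connection.ExecQuery("Select Name From Win32_OperatingSystem")
    For Each System In SystemClass
        System.Win32Shutdown Flags
    Next
End Sub

'Usage: forced reboot = Reboot (2) + Forced (4)
ShutDownWithFlag 2 + 4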
A simple extension of the script to shut down a remote computer (from http://www.motobit.com/tips/detpg_wmi-windows-system-shutdown/):

Sub ShutDownEx(Server, User, Password)
    Dim Connection, WQL, SystemClass, System
    'Get a connection to WMI on the remote machine
    Dim Locator
    Set Locator = CreateObject("WbemScripting.SWbemLocator")
    Set Connection = Locator.ConnectServer(Server, "root\cimv2", User, Password)
    'Get the Win32_OperatingSystem objects - only one object in the collection
    WQL = "Select Name From Win32_OperatingSystem"
    Set SystemClass = Connection.ExecQuery(WQL)
    'Get the one system object and reboot it (flag 2)
    For Each System In SystemClass
        System.Win32Shutdown (2)
    Next
End Sub
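A hedged usage example (the server name and credentials are placeholders, not real values):

'Reboot a remote machine - replace with your own server and admin account
ShutDownEx "Server01", "Administrator", "secret"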
Wednesday, October 3, 2007
[mssql] 4 Simple Rules for Handling NULLs
Introduction
"As we know,
There are known knowns.
There are things we know we know.
We also know
There are known unknowns.
That is to say
We know there are some things
We do not know." - Donald Rumsfeld
In a recent article by James Travis, the handling of NULLs in SQL Server was explored. I discovered some inaccuracies in the article; some caused by inaccuracies in SQL Server Books Online, others rooted in trying to apply other standard 3GL and 4GL programming language two-valued logic to the three-valued logic employed by ANSI SQL.
This article was written to explain ANSI SQL three-valued logic and to correct the inaccuracies presented there.
What is NULL?
First, let's start by defining what a NULL is in SQL-speak. ANSI-SQL defines NULL as "a special value, or mark, that is used to indicate the absence of any data value." (ANSI-92) NULL in SQL is not a "data value", but rather an indicator that we have missing data. As an example, we’ll look at the following table which contains last name, first name and middle name.
Last_Name | First_Name | Middle_Name
---|---|---
'Smith' | 'Linda' | 'Anita'
'Washington' | 'Larry' | ''
'Jones' | 'John' | NULL
In this example, we know that Linda Smith has a middle name, and we know that it is 'Anita'. We have populated Larry Washington's Middle_Name column with a Zero-Length String ('') indicating that we know for a fact Larry has no middle name. These are the "known knowns" in Rumsfeld-speak.
In the case of John Jones, however, we have set the Middle_Name column to NULL. This indicates that we do not know whether John has a middle name or not. We may discover, in the future, that John has a middle name; likewise, we may discover he has no middle name. In either case, we will update his row accordingly. So, NULLs represent our "known unknowns"; the only thing we know for a fact is that we do *not* know whether John has a middle name or not.
Because of the inherent issues surrounding NULLs and data integrity, three-valued logic, scalar arithmetic and performance, DBAs and database designers tend to minimize the use of NULLable columns in their tables.
Rule #1: Use NULLs to indicate unknown/missing information only. Do not use NULLs in place of zeroes, zero-length strings or other "known" blank values. Update your NULLs with proper information as soon as possible.
Three-Valued Logic
Three-valued logic is always in the Top 10 list of the most confusing aspects of ANSI SQL for those coming from other 3GL and 4GL languages. Anyone coming from a background in most other computer languages will instantly recognize the two-valued logic table:
Two-Valued Logic Table:
p | q | p AND q | p OR q
---|---|---|---
True | True | True | True
True | False | False | True
False | True | False | True
False | False | False | False
And of course the logic for NOT is fairly simple:
Two-Valued NOT Logic Table:
p | NOT p
---|---
True | False
False | True
So what is Three-Valued Logic? In SQL three-valued logic, we introduce the concept of Unknown into our logic, which is closely associated with NULL. Our SQL Logic Table looks like this:
Three-Valued Logic Table:
p | q | p AND q | p OR q
---|---|---|---
True | True | True | True
True | False | False | True
True | Unknown | Unknown | True
False | True | False | True
False | False | False | False
False | Unknown | False | Unknown
Unknown | True | Unknown | True
Unknown | False | False | Unknown
Unknown | Unknown | Unknown | Unknown
And in SQL logic, the NOT logic table looks like this:
Three-Valued NOT Logic Table:
p | NOT p
---|---
True | False
False | True
Unknown | Unknown
So how do we arrive at Unknown in these logic tables? Simply put, we try to compare NULLs to data values, or other NULLs. Per the ANSI-92 SQL Standard, comparisons with NULL always result in Unknown.
Try the following SQL Query in Query Analyzer (requires the Northwind Sample Database):

SET ANSI_NULLS ON
SELECT * FROM [Northwind].[dbo].[Employees]
WHERE [Region] IS NULL

This query returns four employee records. Now try this query:

SET ANSI_NULLS ON
SELECT * FROM [Northwind].[dbo].[Employees]
WHERE [Region] = NULL

This query returns no records. The reason for this is that '[Region] = NULL' evaluates to Unknown every time. Since it never evaluates to True for any rows, it returns no records.
NOTE: It has been widely misstated (even in SQL Server Books Online) that "the result of a comparison is false if one of the operands is NULL." In fact, the ANSI-92 SQL Standard states that if one of the operands is NULL, the result of the comparison is "UNKNOWN". (ANSI-92 SQL Standard) SQL Books Online has recently been updated to reflect the accurate ANSI Standard concerning NULL comparisons (See the update here: Microsoft ANSI_NULLS).
NULL Versus NULL
NULLs are not equal to other NULLs. In his article, Understanding the Difference Between "IS NULL" and "=NULL", James Travis states that a comparison to a variable initialized to NULL returns False. And it would appear so by his example:

DECLARE @val CHAR(4)
SET @val = NULL
SET ANSI_NULLS ON
IF @val = NULL
PRINT 'TRUE'
ELSE
PRINT 'FALSE'

Mr. Travis' example prints 'FALSE', which would indicate that the result of the comparison '@val = NULL' is False. This is not the case, however. The result of '@val = NULL' is Unknown. The IF statement prints 'TRUE' if, and only if, the result of the comparison is True; otherwise it falls through to the ELSE statement and prints 'FALSE'. For three-valued logic, we need three comparisons to figure out what the actual result of the '@val = NULL' comparison is:
DECLARE @val CHAR(4)
SET @val = NULL
SET ANSI_NULLS ON
IF @val = NULL
PRINT 'TRUE'
ELSE IF NOT(@val = NULL)
PRINT 'FALSE'
ELSE
PRINT 'UNKNOWN'

Now we see that the result of the comparison is neither True nor False. It is the third value in our three-valued logic: Unknown. Try this statement to demonstrate that NULL is not equal to other NULLs:

SET ANSI_NULLS ON
IF NULL = NULL
PRINT 'TRUE'
ELSE IF NOT (NULL = NULL)
PRINT 'FALSE'
ELSE
PRINT 'UNKNOWN'

This statement prints 'UNKNOWN' because 'NULL = NULL' evaluates to Unknown. Since 'NULL = NULL' evaluates to Unknown, if we look back at the NOT Logic table for three-valued logic, we see that NOT(Unknown) also evaluates to Unknown. This means that NOT(NULL = NULL) is Unknown as well. So what do we expect when we run this statement?

SET ANSI_NULLS ON
IF NULL <> NULL
PRINT 'TRUE'
ELSE IF NOT (NULL <> NULL)
PRINT 'FALSE'
ELSE
PRINT 'UNKNOWN'

If you expect to see 'TRUE', guess again! 'FALSE'? Not a chance. Remember what we said earlier: any comparison with NULL evaluates to Unknown (neither True nor False). Since the comparisons in the IF statements evaluate to Unknown, 'UNKNOWN' is printed in this instance.
The same is true for any comparisons with NULL under the ANSI Standard; whether you are comparing NULL to a CHAR, INT or any other value, variable or table column.
Rule #2: In ANSI SQL, NULL is not equal to anything, even other NULLs! Comparisons with NULL always result in UNKNOWN.
NULL Variables
Variables that have been declared, but not yet initialized via a SET statement, default to NULL. They are treated exactly like any other NULL values. We can modify the previous queries to see this for ourselves:

SET ANSI_NULLS ON
DECLARE @MyRegion NVARCHAR(15)
SELECT * FROM [Northwind].[dbo].[Employees]
WHERE [Region] = @MyRegion

This returns no rows, just like the previous query with '= NULL' in the WHERE clause. It has exactly the same effect as the following:

SET ANSI_NULLS ON
DECLARE @MyRegion NVARCHAR(15)
SET @MyRegion = NULL
SELECT * FROM [Northwind].[dbo].[Employees]
WHERE [Region] = @MyRegion

NOTE: You cannot put a variable in place of NULL in the IS NULL clause. The following query will *not* work:

SET ANSI_NULLS ON
DECLARE @MyRegion NVARCHAR(15)
SELECT * FROM [Northwind].[dbo].[Employees]
WHERE [Region] IS @MyRegion
ANSI Versus Microsoft
As we described above, the ANSI SQL Standard defines the result of any comparison with NULL as Unknown. So NULL = NULL, NULL <> 10 and NULL < 'Hello' are all Unknown.

Presumably to make the transition smoother for 3GL and 4GL programmers moving over to SQL from other languages, Microsoft implemented the SET ANSI_NULLS OFF option, which allows you to perform = and <> comparisons with NULL values. To demonstrate this in action, we can perform the following query:
SET ANSI_NULLS OFF
SELECT * FROM [Northwind].[dbo].[Employees]
WHERE [Region] = NULL

This allows you to perform basic equality comparisons to NULL; however, SQL written using this option may not be portable to other platforms, and might confuse others who use ANSI Standard NULL handling. Also, if you try to perform this query without turning ANSI_NULLS OFF first, you will get no rows returned. The safe bet is to use SET ANSI_NULLS ON.
Rule #3: Use SET ANSI_NULLS ON, and always use ANSI Standard SQL Syntax for NULLs. Straying from the standard can cause problems including portability issues, incompatibility with existing code and databases and returning incorrect results.
NULL Math
One final point concerns math and string operations with NULLs. If you perform scalar math operations or string concatenations with NULL, the result is always NULL. For instance, this query returns NULL:

SELECT 1 + NULL

A numeric value plus, minus, divided by, or multiplied by NULL always equals NULL. A string concatenation with NULL equals NULL as well:

SELECT 'Joe' + NULL

So what if you want to treat NULL as a zero in a math operation? Use SQL's COALESCE() function:

SELECT 1 + COALESCE(NULL, 0)

The COALESCE() function returns the first non-NULL value in its list of values. So in the statement above, COALESCE(NULL, 0) returns 0, and the SELECT statement returns 1. Similarly, you can use COALESCE() in string concatenations:

SELECT 'Joe' + COALESCE(NULL, '')

The COALESCE() function in this case returns a zero-length string, which is appended to 'Joe'. The end result is that 'Joe' is returned instead of NULL. SQL Server provides an additional function called ISNULL() which performs similarly to COALESCE(). Prefer COALESCE() (or alternatively CASE), as it is ANSI standard syntax.
Rule #4: The ANSI Standard COALESCE() and CASE syntaxes are preferred over ISNULL() or other proprietary syntax.
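As a sketch of that rule in action, here is a CASE equivalent of the COALESCE() call used above (@val is a throwaway illustration variable, not from the article):

SET ANSI_NULLS ON
DECLARE @val INT
SET @val = NULL
--Same result as SELECT 1 + COALESCE(@val, 0): returns 1
SELECT 1 + CASE WHEN @val IS NOT NULL THEN @val ELSE 0 END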
Conclusions
SQL NULLs and three-valued logic can be confusing for a new SQL database designer, particularly if you are coming from a background in another 3GL or 4GL language that uses two-valued logic (e.g., C++, VB, etc.). Learning when and how to use NULLs in your databases, how SQL handles NULLs, and the results of comparisons and math/string operations with NULLs in SQL is very important in order to obtain consistent and optimized results.
What is the Best Method to Back up Data?
Some Methods of Obtaining Backups
The method, or rather the medium, chosen for backing up data will depend on the volume of data to be stored, as well as the nature of the data. Some common methods are listed below:
CD and Floppy: The floppy has largely gone out of use by now, due to its many limitations, small capacity and security problems. The CD is a good choice for obtaining backups, and is one of the favourite choices in both domestic and corporate spheres. It is cheap, readily available, portable and compatible with several file formats. Storing a CD and sharing information from it are also quite simple.
DVD: The DVD is a good choice for storing audio-visual information. It may not be as cheap as the CD, but it offers better quality, often lasts longer, and has far more space. DVD-RWs can be used in such a way that literally every bit is used to store information. A DVD offers the same ease of sharing and transport. However, many market observers feel the DVD-RW may turn out to be an intermediate technology, either replaced by something superior but very similar in function, or evolving in a way that solves its current problems. The constant research in this area, and inventions like the Blu-ray disc, suggest that we can hope for a better technology soon.
USB Devices: These are the new favourites all over the world. The greatest proof of the popularity of this technology lies in two facts: firstly, the storage space of pen drives / flash drives / memory sticks is constantly increasing; secondly, their prices are plummeting rapidly. Almost all kinds of files can be stored on them, they are very easy to carry, they do not require separate software or a reboot of the machine to start functioning, and the data stored on them is easy to share. For smaller music libraries, an iPod may be an option too.
External Hard Drives: For larger data storage requirements, external hard drives are excellent solutions. Arrays or stacks of hard drives are available for corporate purposes and are provided by all good hard drive manufacturers. Seagate, Maxtor, ASB, LaCie all make external hard drives of varying capacities and prices.
Online and Offline Storage Spaces: These are ‘spaces’ provided by professional firms, and are extremely secure for sensitive data storage.
Tapes and Printouts: Traditional storage should not be left out of the list. There is some information that one simply needs to see on the good old piece of paper. Tape has been in use for a long time now, and will probably continue for some time.
By: Jammesh Wallsh
http://saveav.blogspot.com/2007/09/what-is-best-method-to-back-up-data.html
Ensuring Data Protection
Think about what you experience when you lose important data. You cannot even sleep properly if your data is lost. It is thus essential to have regular backups done.
Nobody wants their personal and confidential information to be disclosed over the Internet, but unfortunately, with the growing use of computers these days, data is no longer secure. The size of your organization or business is hardly a concern, and it doesn't matter whether you use your computer at home or in the office; the real concern is data protection. Protecting confidential data is a great challenge in itself. It is also important to be aware of the exact reasons behind data loss.
Various tools are available on the market these days to protect your valuable data, among them backups, firewalls and anti-virus software packages.
Negligence by employees is another reason for data leakage, and it follows that employees can prevent a major portion of this data loss. If employees are equipped with basic knowledge about computers and are honest while performing their tasks, a lot of important data can be kept from getting lost. If a strong security system is not implemented, employees tend to take security for granted and work carelessly, which may mean that confidential data is lost.
Here are some methods that would help to protect your data:
Encryption: You can protect your data using a technique called encryption, in which data is converted into a code that can be decoded only by those who know the key or the related password. With so much data encryption software available on the market these days, you can choose any of it to ensure the safe handling of your data.
Backups: Backing up your data is a practice you should always follow in order to avoid sudden data loss; it is most effective when data loss is caused by a sudden power failure. If you are performing regular backups, these situations can easily be survived. There is no single prescribed interval for backups, but once a week should be the minimum, three times a week is commonly advised, and backing up daily is best of all. How confidential your data is should determine the frequency of these backups.
Firewalls: Security is usually built in layers. The operating system itself may serve as a protector of data, or a firewall may be used. With a firewall, nobody can access your system without your permission, which makes your data much safer. It is also recommended to update your computer at regular intervals to maintain security consistently.
With this helpful information, you can ensure that your data is protected and not lost.
http://saveav.blogspot.com/2007/09/ensuring-data-protection.html
Top 10 Ways To Make A Computer Run Faster
1. Free Up Disk Space
A great way to make a computer run faster is to free up some disk space. Windows comes with a disk cleanup tool that helps you free up space on your hard disk. The utility identifies files that you can safely delete without messing anything up, and then enables you to choose whether you want to delete some or all of the identified files.
How to use the Disk Cleanup tool (a command-line sketch follows these steps):
* Click Start, point to All Programs, point to Accessories, point to System Tools, and then click Disk Cleanup. If several drives are available, you might be prompted to specify which drive you want to clean.
* In the Disk Cleanup for dialog box, scroll through the content of the Files to delete list. (If you have multiple hard drives you will have to select which one to scan.)
* Clear the check boxes for files that you don't want to delete, and then click OK.
* When prompted to confirm that you want to delete the specified files, click OK.
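The same cleanup can also be scripted from the command line, which pairs nicely with the Scheduled Tasks technique described earlier (a sketch; the profile number 1 is arbitrary):

cleanmgr /sageset:1
cleanmgr /sagerun:1

The first command lets you pick which file categories to clean and saves the choices under profile 1; the second runs that saved profile without prompting.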
2. Use Defrag Tool
When you add a file or a new program to a new computer, the hard drive is relatively empty, so new data is saved to the hard drive in one whole block. When you need to use that information, the computer can quickly access it because it is all in one place. As the drive fills up, however, files get split across whatever free gaps remain, so reading them forces the drive to jump from place to place. Defragging your system puts those pieces back in order, and is definitely a way to make a computer run faster.
How to use the Disk Defragmenter tool (a command-line sketch follows these steps):
* From the start menu point to "all programs"
* Point to "Accessories"
* Point to "System Tools"
* Click on "Disk Defragmenter"
* The disk defragmenter will display the hard drives on your computer. Just select and click Defragment.
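Defragmentation can also be run from the command line (a sketch; the drive letter is just an example):

defrag c: -a
defrag c:

The -a switch only analyzes the volume and reports whether defragmentation is needed; the second command performs the actual defragmentation.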
3. Get rid of spyware
Definition
Spyware is software with malicious intent; by design, it does something bad to your computer. Usually, it gets installed without your knowledge. It sometimes takes advantage of the fact that most people click "I Agree" to software licenses without reading them.
The only way to get rid of spyware is to have some kind of scanner, sort of like an anti-virus. In order to make a computer run faster and keep it safer, you're going to need an anti-spyware program. This is tricky: a lot of the "spyware scanner and blocker" programs out there are themselves malicious and will actually add viruses and spyware to your computer. So please remember to be very careful when choosing your anti-spyware program.
4. Find and repair disk errors
Detect and Repair Disk Errors
In addition to running Disk Cleanup and Disk Defragmenter to optimize the performance of your computer, you can check the integrity of the files stored on your hard disk by running the Error Checking utility.
How to run the Error Checking utility (a command-line sketch follows these steps):
* Click Start, and then click My Computer.
* In the My Computer window, right-click the hard disk you want to search for bad sectors, and then click Properties.
* In the Properties dialog box, click the Tools tab.
* Click the Check Now button.
* In the Check Disk dialog box, select the Scan for and attempt recovery of bad sectors check box, and then click Start.
* If bad sectors are found, choose to fix them.
Tip: Select the Scan for and attempt recovery of bad sectors check box only if you think that your disk contains bad sectors; the check takes considerably longer with it enabled.
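The same check can be run from the command line (a sketch; the drive letter is just an example):

chkdsk c: /f /r

The /f switch fixes file system errors and /r locates bad sectors and recovers readable information; on the system drive, the check is scheduled to run at the next restart.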
5. Get an external hard drive
Adding an External Hard Drive to your computer will allow you to clear out a lot of the things you don't need to be storing on your normal hard drive.
A great example would be your music or pictures. All those songs and pictures sitting on your computer make it go slower, and you may see a decline in your computer's performance once more than half of your hard drive space is in use.
Moving all your songs or pictures to your external hard drive will free up a lot of space on your main hard drive. You can even move applications such as Photoshop, or games.
An external hard drive is also handy because it is portable. You can unplug it from your computer, take it to a friend's house and plug it in there, sharing photos and files very easily.
This is truly a great way to make a computer run faster.
By : Brooks Carver
http://saveav.blogspot.com/2007/09/top-10-ways-to-make-computer-run-faster.html
[vb6] InputBox is OK or Cancel?
You see, if the user presses Cancel, vbNullString is returned. However, if they press OK with an empty field, an empty string ("") is sent back. But in Visual Basic, vbNullString compares equal to an empty string, so you can't tell the two apart with a simple comparison, even though in reality they're completely different.
However you can use the StrPtr ('string pointer') function to determine whether the return string is indeed a vbNullString - as by definition, a vbNullString 'pointer' is zero.
Private Sub Form_Load()
    Dim strInput As String
    strInput = InputBox("Enter something:")
    If StrPtr(strInput) = 0 Then
        'Cancel returns vbNullString, whose string pointer is zero
        MsgBox "You pressed the Cancel button!"
    ElseIf strInput = "" Then
        MsgBox "You pressed the OK button with a blank value!"
    Else
        MsgBox "You entered: " & strInput
    End If
End Sub
Tuesday, October 2, 2007
[mssql] Getting directory path
DECLARE @cPath varchar(100),
        @cDir varchar(100)

SET @cPath = 'D:\TEMP\test.txt'

--Keep everything up to (but not including) the last backslash
SET @cDir = (CASE WHEN charindex('\', @cPath) = 0
                  THEN ''
                  ELSE left(@cPath, len(@cPath) - charindex('\', reverse(@cPath)))
             END)

SELECT @cDir --> 'D:\TEMP'
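A complementary sketch built on the same reverse/charindex idea: pulling out just the file name instead of the directory (@cFile is new here, not part of the original snippet):

DECLARE @cPath varchar(100),
        @cFile varchar(100)
SET @cPath = 'D:\TEMP\test.txt'
--Keep everything after the last backslash
SET @cFile = (CASE WHEN charindex('\', @cPath) = 0
                   THEN @cPath
                   ELSE right(@cPath, charindex('\', reverse(@cPath)) - 1)
              END)
SELECT @cFile --> 'test.txt'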
[vb6] Get the Random Number
'Returns a random number of pLength digits as a zero-padded string.
'Pass a nonzero pSequence to seed the generator for repeatable results.
Private Function GetRandomVal(ByVal pLength%, Optional ByVal pSequence&) As String
    If pSequence = 0 Then Randomize Else Randomize pSequence
    GetRandomVal = _
        Right(String(pLength - 1, "0") & _
              Format(CSng(((10 ^ pLength) - 1 + 1) * Rnd + 1) + pSequence, _
                     String(pLength, "#")), pLength)
End Function
Private Sub Form_Load()
MsgBox GetRandomVal(4) & vbNewLine & _
GetRandomVal(5, 1) & vbNewLine & _
GetRandomVal(5, 2) & vbNewLine
End Sub
Monday, October 1, 2007
[mssql] Indexes and Performance
SQL Server - Indexes and Performance
By Steve Jones
Introduction
One of the keys to SQL Server performance is ensuring that you have the proper indexes on a table so that any queries written against this table can run efficiently. There are more articles written about designing indexes, choosing columns, etc for optimizing performance, so I will refrain from repeating most of what is written elsewhere. I have included a few resources at the end of this article for this topic.
However once you have built the indexes, there is still work to be done. As your data sets grow over time, SQL Server will continue to rebuild indexes and move data around as efficiently as possible. This happens in a number of ways, but the result is that you may need to perform maintenance on your indexes over time despite all of the automatic tools built into SQL Server. This article will discuss some of the issues with data growth over time as well as a technique to find tables in need of maintenance and how to perform this maintenance.
NOTE: I am including some features of SQL 2000 in this article, but the techniques are still useful in SQL Server v7.x and v6.5 with minor modifications.
What happens over time?
If SQL Server includes auto statistic updating, a query optimizer that can learn to be more efficient with your queries, etc., why do we need to perform maintenance? Well, let's examine what happens over time.
When you build an index on a table (let's assume a clustered index), SQL Server parcels the data across pages and extents. With v7.x and above, extents can be shared between objects (with v6.5 extents contain a single object). As a result, let's assume you create a table with rows small enough that a number of them fit on each 8KB page.
If we expand this example over time, we may grow to 100 pages of data. These require at least 13 extents (at 8 pages per extent) if this object does not share any extents. Each page within the extents links to another page with a pointer. The next page in the chain, however, may not be in the same extent. Therefore as we read the pages, we may need to "switch" to another extent.
The simplest example is assume we take 3 consecutive pages of data in the following order:
Extent 1 | Extent 2
---|---
Page n | Page n + 1
Page n + 2 |
These are any three pages where page n links to page n+1, then to page n+2, and so on. To read these three pages we read extent 1, then switch to extent 2, then switch back to extent 1. These "switches" do not necessarily entail physical I/O, but all of these switches add up. They may not be a big deal on your local server or even a lightly loaded server, but a web application that has hundreds or thousands of users could see a large performance impact from repeated scans of this table.

Why does the table end up looking like this? This is how the table is designed to function over time. SQL Server will allocate space for each row based on the space available at that time. As a result, while a clustered index stores the data in physical order on a page, the pages may not be in physical order. Instead each page has a linkage to the next page in the sequence. Just as your hard disk can become fragmented over time as you delete and insert files, the allocations of pages for a table can be fragmented over time across extents as the data changes.
So why doesn't SQL Server just rebuild the indexes? I am not sure if I would even want it to do so. I would hate for this to occur right after a large web marketing campaign! Instead the engineers in Redmond have left it up to the DBA to track this fragmentation and repair it as necessary. How do we remove this fragmentation? Read on...
Running DBCC SHOWCONTIG
Prior to SQL Server 2000, you had to first get the object ID using the following command
select object_id('object-name')
For the user table, I ran
select object_id('user')
This returned me some long number (from sysobjects) that means nothing to me, but the SQL team in Redmond must use this often and did not feel like including the join in their code. I guess someone complained long and loud enough because in SQL 2000 you can use the name of the object in dbcc showcontig like this:
dbcc showcontig ('user')
This produces the following statistics on your indexes:
DBCC SHOWCONTIG scanning 'User' table...
Table:'User' (962102468); index ID: 1, database ID: 7
TABLE level scan performed.
-Pages Scanned................................: 899
-Extents Scanned..............................: 121
-Extent Switches..............................: 897
-Avg. Pages per Extent........................: 7.4
-Scan Density [Best Count:Actual Count].......: 12.58% [113:898]
-Logical Scan Fragmentation ..................: 99.89%
-Extent Scan Fragmentation ...................: 80.99%
-Avg. Bytes Free per Page.....................: 2606.5
-Avg. Page Density (full).....................: 67.80%
Let's decode this output:
Pages Scanned - Gives the number of physical pages in the database scanned in this index. Not really relevant, but it gives you the total size occupied by this index (each page is 8KB).
Extents Scanned - An extent is 8 pages, so this should be pretty close to Pages Scanned / 8. In this example we have 121 extents, which is 968 pages. Since the index is only 899 pages, we have a number of shared extents. Not necessarily a bad thing, but this gives you an idea that you are slightly fragmented. Of course, you do not know how much physical fragmentation there is, which can contribute to longer query times. The minimum number of extents for the 899 pages above would be 113 (899/8, rounded up).
Extent Switches - The number of times the scan forced a switch from one extent to another. If this gets close to the number of pages, you have pretty high fragmentation and may want to rebuild the index. See a Detailed Example.
Average Pages/Extent - Gives the math of Pages Scanned / Extents Scanned. Not of any great value other than you don't have to run Calculator to get the number. Fully populated extents would give a value of 8 here. I guess this is good for me.
Scan Density [Best Count:Actual Count].......: 12.58% [113:898]
This is the tough one. It shows a percentage and two numbers separated by a colon; I will explain, as I missed it the first two times around. The percentage is the result of dividing number 1 (113) by number 2 (898). So what are the two numbers?
The first number is the ideal number of extent changes if everything were linked in a contiguous chain. The second number is the number of extents moved through, which is 1 more than the number of extent switches (by definition). This is really another view of fragmentation. 100% would be minimal (I hate to say zero) fragmentation. As you can see, this table is fairly fragmented. The scan is constantly switching back and forth from one extent to another instead of finding a link from one page to another within an extent.
Logical Scan Fragmentation ..................: 99.89%
The official definition from Books Online (v7.x and 2000 Beta 2) is:
"Percentage of out-of-order pages returned from scanning the leaf pages of an index. This number is not relevant to heaps and text indexes. An out of order page is one for which the next page indicated in an IAM is a different page than the page pointed to by the next page pointer in the leaf page."
I am still not sure what this means. I have not gotten a good explanation of this anywhere, so here is my best interpretation. This shows how many pages (as a percentage) in the index which have a pointer to the next page that is different than the pointer to the next page that is stored in the leaf (data) page. This is only relevant for clustered indexes as the data (leaf pages) should be physically in the order of the clustered index.
So how do you use this? If you figure it out, let me know. Since this number is high for me and other items lead me to think this index is fragmented, I think this is bad. So try for a low number in OLAP systems and a medium number in OLTP systems.
Extent Scan Fragmentation ...................: 80.99%
Again, here is the official BOL explanation (v7.x and 2000 Beta 2).
Percentage of out-of-order extents in scanning the leaf pages of an index. This number is not relevant to heaps. An out-of-order extent is one for which the extent containing the current page for an index is not physically the next extent after the extent containing the previous page for an index.
This shows the percentage of pages where the next page in the index is not physically located next to the current page. This tells me the I/O system must move fairly often (80% of the time) when scanning the index to find the next page. A Detailed Explanation is given below.
Avg. Bytes Free per Page.....................: 2606.5
This tells you (on average) how many bytes are free per page. Since a page is 8096 bytes, it appears that I have, on average, filled about 68% of the pages. This can be good or bad. If this is an OLTP system with frequent inserts to this table, then with more free space per page, there is less likely to be a page split when an insert occurs. You want to monitor this on tables with heavy activity and periodically rebuild this index to spread out the data and create free space on pages. Of course you do this during periods of low activity (read as 3am) so that there is free space and page splits are minimal during periods of high activity (when everyone can yell at you for a slow database). Since this is an OLTP system, I am in pretty good shape.
If this were an OLAP system, then I would rather have this be closer to zero since most of the activity would be read based and I would want the reads to grab as much data as possible from each page (to reduce the time it takes to read the index). As your OLAP table grows, this becomes more critical and can impact (substantially) the query time for a query to complete.
Avg. Page Density (full).....................: 67.80%
This gives the page density as a percentage, derived from the previous number (1 - (2606.5 / 8096) comes to roughly 67.8%).
So what does this all mean?
Well, to me this says I need to defragment this table. There are a large number of extent switches that occur, each of which could potentially cause a large I/O cost to queries using this table and index.
Defragmenting Indexes
In previous versions, and still in this one, you can rebuild the clustered index, which causes the server to read the clustered index and then begin moving the data to new extents and pages. This should put everything back in physical order and reduce fragmentation. On SQL 2000 the rebuild can be issued with DBCC DBREINDEX, sketched below.
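--Rebuild all the indexes on the table with a 90% fill factor
--(the table name and fillfactor here are examples, not prescriptions)
DBCC DBREINDEX ('User', '', 90)

There is another way: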
In SQL 2000, the SQL developers added another DBCC option which is INDEXDEFRAG. This can defragment both clustered and nonclustered indexes which (according to BOL) should improve performance as the physical order will match the logical order and (theoretically) reduce the I/O required by the server to scan the index.
A couple of caveats about this: if your index spans files, then it defragments each file separately and does NOT move pages between files. Not a good thing if you have added a new filegroup and allowed objects to grow across files.
A good thing that is way, way, way, extremely, absolutely, without-a-doubt long overdue is the reporting of progress by DBCC INDEXDEFRAG as it works. Every 5 minutes this will report the estimated progress back to the user. Of course many of us who have installed software with a feedback progress bar often wonder why the bar moves quickly to 99% and remains there for 80% of the total install time. So time will tell whether this is of any use, but I think some feedback is better than none.
Another addition that is way, way, way, (you get the idea) overdue is the ability to stop the DBCC. I cannot tell you how many late nights I wished I could do this in v6.5. In fact I often held off on running DBCC until the latest possible time since I could not stop it once it started. (well, there was that O-N-O-F-F switch.)
Still one further addition, one that ranks above the other two, is that this is an online operation. Let me repeat that: it's an ONLINE operation. It does not hold locks on the table, since it operates as a series of short transactions to move pages. It also operates more quickly than a rebuild of a new index, and the time required is related to the amount of fragmentation for the object. Of course this means that you must have extensive log space if this is a large index. Something to keep in mind; watch the log growth when you run this to see how much space it eats up. Everything's a trade-off, though. A sketch of the call follows.
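A minimal sketch of the call itself (0 means the current database; the table and index ID are examples):

--Defragment index ID 1 (the clustered index) on the User table
DBCC INDEXDEFRAG (0, 'User', 1)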
Conclusion
Maintaining indexes still requires DBA intervention for optimal performance of your database server. How often do you have to maintain these indexes? That depends (like everything else in the RDBMS world) on your system. This is still an art and requires some practice, testing, and careful notes over time.
While Microsoft continues to improve this process over time and automate some of the work, it will probably still require some DBA intervention for the foreseeable future. Practice with tuning indexes is an often overlooked maintenance item, but one which will become more important to your users as their numbers swell.
Good luck and as always, please send me an email with comments, questions, or suggestions.
References
Here are some of the references for Indexing on the Internet and in publication. I have used these in the past, though not necessarily for this article. These are the ones I recommend.
- Professional SQL Server 7 Programming by Robert Vieira. Chapter 9 is devoted to indexes. See my book review page for more information.
- The Guru's Guide to Transact-SQL by Ken Henderson. See my book review page for more information.
- Index Optimization Tips by Alexander Chigrik on Swynk. A concise list of tips for building good indexes.
- Planning and Creating Indexes from the Microsoft SQL Server v7.0 Database Implementation Training Kit. A good explanation of how indexes are structured in the server.
- Performance Tuning Guide from Microsoft. Everyone should read this at least once and refer to it whenever you are trying to improve performance.
- Most SQL Server books include basic information on indexes, which is useful IF you read it. Indexing is definitely an art and you have to practice it to get better.
[mssql] Synchronize Database User with Login After Restoring
--Re-link each database user to the server login of the same name by SID.
--NOTE: direct updates to system tables require the 'allow updates'
--server option to be enabled via sp_configure.
UPDATE sysusers
SET sysusers.sid = master..syslogins.sid
FROM sysusers, master..syslogins
WHERE sysusers.name = master..syslogins.loginname AND
sysusers.name <> 'dbo'
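On SQL 2000 the supported alternative to touching sysusers directly is sp_change_users_login; a sketch for a single orphaned user (the user/login name is a placeholder):

--Re-map database user 'someuser' to the server login of the same name
EXEC sp_change_users_login 'Update_One', 'someuser', 'someuser'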
[mssql] Listing the Value of Certain Column
DECLARE @cList varchar(1000)

--@cList starts out NULL, so ISNULL drops the leading comma on the
--first row; every subsequent row appends ',value'
SELECT @cList = isnull(@cList + ',', '') + MyColumn
FROM MyTable (NOLOCK)

SELECT @cList --> show result, e.g. 'value1,value2,value3'
[mssql] Getting Length of A Text Field
Background: running

select len(notes) from MyTable

gives this error:

Server: Msg 8116, Level 16, State 2, Line 1
Argument data type text is invalid for argument 1 of len function.

Answer: Just use the datalength function instead of the len function. (Note that datalength returns a size in bytes, so for an ntext column divide the result by 2 to get the character count.)

Example:

SELECT datalength(notes) FROM MyTable (NOLOCK)
http://www.answermysearches.com/ms-sql-server-getting-length-of-a-text-field/138/