OsakaWebbie
Programmer
I will be making multiple copies of this structure, so little decisions like this will add up. My database has 24 tables, and all but one of them might have multibyte characters entered in them, so the tables are UTF8 with either utf8_general_ci or utf8_unicode_ci collation, whichever seemed to make the most sense. But one little reference table only holds MIME encoding info, so it will have nothing but ASCII text and a couple of booleans.
So my question is: Is it better for all the tables in a database to use the same character set (and just vary at the column level sometimes), or, if even one table can be ASCII, is it best to make it so? In other words, if all the rest of the tables are UTF8, which of these would fit better into that environment?
Code:
CREATE TABLE IF NOT EXISTS `uploadtype` (
  `Extension` varchar(8) CHARACTER SET ascii NOT NULL,
  `MIME` varchar(100) CHARACTER SET ascii NOT NULL,
  `BinaryFile` tinyint(1) NOT NULL DEFAULT '1',
  `InBrowser` tinyint(1) NOT NULL DEFAULT '0',
  PRIMARY KEY (`Extension`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
Code:
CREATE TABLE IF NOT EXISTS `uploadtype` (
  `Extension` varchar(8) NOT NULL,
  `MIME` varchar(100) NOT NULL,
  `BinaryFile` tinyint(1) NOT NULL DEFAULT '1',
  `InBrowser` tinyint(1) NOT NULL DEFAULT '0',
  PRIMARY KEY (`Extension`)
) ENGINE=MyISAM DEFAULT CHARSET=ascii;
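For what it's worth, MySQL records the character set that each column actually ends up with (a column-level CHARACTER SET overrides the table default), so either version can be verified after creation. A quick check against information_schema, assuming the table lives in a schema named `mydb` (placeholder):

Code:
-- Shows the effective charset/collation per column of `uploadtype`;
-- the tinyint columns will show NULL since they have no character set.
SELECT COLUMN_NAME, CHARACTER_SET_NAME, COLLATION_NAME
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = 'mydb'   -- placeholder schema name
  AND TABLE_NAME = 'uploadtype';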